2026-04-07 01:22:14.405837 | Job console starting
2026-04-07 01:22:14.422435 | Updating git repos
2026-04-07 01:22:14.498677 | Cloning repos into workspace
2026-04-07 01:22:14.727044 | Restoring repo states
2026-04-07 01:22:14.750635 | Merging changes
2026-04-07 01:22:14.750667 | Checking out repos
2026-04-07 01:22:15.059619 | Preparing playbooks
2026-04-07 01:22:15.765646 | Running Ansible setup
2026-04-07 01:22:20.242809 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 01:22:21.016959 |
2026-04-07 01:22:21.017136 | PLAY [Base pre]
2026-04-07 01:22:21.034737 |
2026-04-07 01:22:21.034904 | TASK [Setup log path fact]
2026-04-07 01:22:21.065410 | orchestrator | ok
2026-04-07 01:22:21.083022 |
2026-04-07 01:22:21.083161 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 01:22:21.126789 | orchestrator | ok
2026-04-07 01:22:21.140372 |
2026-04-07 01:22:21.140486 | TASK [emit-job-header : Print job information]
2026-04-07 01:22:21.198201 | # Job Information
2026-04-07 01:22:21.198505 | Ansible Version: 2.16.14
2026-04-07 01:22:21.198641 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-07 01:22:21.198733 | Pipeline: periodic-midnight
2026-04-07 01:22:21.198798 | Executor: 521e9411259a
2026-04-07 01:22:21.198877 | Triggered by: https://github.com/osism/testbed
2026-04-07 01:22:21.198920 | Event ID: 319058190fd34c37a7841e4813e72f7e
2026-04-07 01:22:21.209015 |
2026-04-07 01:22:21.209172 | LOOP [emit-job-header : Print node information]
2026-04-07 01:22:21.329411 | orchestrator | ok:
2026-04-07 01:22:21.329688 | orchestrator | # Node Information
2026-04-07 01:22:21.329727 | orchestrator | Inventory Hostname: orchestrator
2026-04-07 01:22:21.329752 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-07 01:22:21.329775 | orchestrator | Username: zuul-testbed06
2026-04-07 01:22:21.329796 | orchestrator | Distro: Debian 12.13
2026-04-07 01:22:21.329822 | orchestrator | Provider: static-testbed
2026-04-07 01:22:21.329843 | orchestrator | Region:
2026-04-07 01:22:21.329865 | orchestrator | Label: testbed-orchestrator
2026-04-07 01:22:21.329885 | orchestrator | Product Name: OpenStack Nova
2026-04-07 01:22:21.329905 | orchestrator | Interface IP: 81.163.193.140
2026-04-07 01:22:21.342114 |
2026-04-07 01:22:21.342254 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-07 01:22:21.865239 | orchestrator -> localhost | changed
2026-04-07 01:22:21.880835 |
2026-04-07 01:22:21.881002 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-07 01:22:22.986112 | orchestrator -> localhost | changed
2026-04-07 01:22:23.000551 |
2026-04-07 01:22:23.000692 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-07 01:22:23.274107 | orchestrator -> localhost | ok
2026-04-07 01:22:23.291261 |
2026-04-07 01:22:23.291439 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-07 01:22:23.316655 | orchestrator | ok
2026-04-07 01:22:23.334254 | orchestrator | included: /var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-07 01:22:23.342486 |
2026-04-07 01:22:23.342638 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-07 01:22:24.862816 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-07 01:22:24.863329 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/d530b3fc0d474923ab0f01e3ee8118aa_id_rsa
2026-04-07 01:22:24.863451 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/d530b3fc0d474923ab0f01e3ee8118aa_id_rsa.pub
2026-04-07 01:22:24.863586 | orchestrator -> localhost | The key fingerprint is:
2026-04-07 01:22:24.863662 | orchestrator -> localhost | SHA256:HLKPCoUid3t/RG/c2TXL+zxhTLnFN1s1+5yo0JTZCXo zuul-build-sshkey
2026-04-07 01:22:24.863725 | orchestrator -> localhost | The key's randomart image is:
2026-04-07 01:22:24.863816 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-07 01:22:24.863916 | orchestrator -> localhost | | |
2026-04-07 01:22:24.863985 | orchestrator -> localhost | | . ..|
2026-04-07 01:22:24.864040 | orchestrator -> localhost | | . . . = ..=|
2026-04-07 01:22:24.864091 | orchestrator -> localhost | | . + o.E o **|
2026-04-07 01:22:24.864142 | orchestrator -> localhost | |o o o . S.+o .=+&|
2026-04-07 01:22:24.864231 | orchestrator -> localhost | |.o o . o ...+.oX+|
2026-04-07 01:22:24.864317 | orchestrator -> localhost | | . . o ..... . o|
2026-04-07 01:22:24.864371 | orchestrator -> localhost | | . o . .. + |
2026-04-07 01:22:24.864426 | orchestrator -> localhost | | . .. =|
2026-04-07 01:22:24.864478 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-07 01:22:24.864672 | orchestrator -> localhost | ok: Runtime: 0:00:01.033178
2026-04-07 01:22:24.881409 |
2026-04-07 01:22:24.881596 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-07 01:22:24.913649 | orchestrator | ok
2026-04-07 01:22:24.925034 | orchestrator | included: /var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-07 01:22:24.936588 |
2026-04-07 01:22:24.936697 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-07 01:22:24.962612 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:24.970627 |
2026-04-07 01:22:24.970745 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-07 01:22:25.598958 | orchestrator | changed
2026-04-07 01:22:25.608068 |
2026-04-07 01:22:25.608201 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-07 01:22:25.913974 | orchestrator | ok
2026-04-07 01:22:25.920493 |
2026-04-07 01:22:25.920633 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-07 01:22:26.380728 | orchestrator | ok
2026-04-07 01:22:26.394444 |
2026-04-07 01:22:26.394716 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-07 01:22:26.876332 | orchestrator | ok
2026-04-07 01:22:26.885219 |
2026-04-07 01:22:26.885359 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-07 01:22:26.920009 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:26.934355 |
2026-04-07 01:22:26.934500 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-07 01:22:27.386690 | orchestrator -> localhost | changed
2026-04-07 01:22:27.410758 |
2026-04-07 01:22:27.410982 | TASK [add-build-sshkey : Add back temp key]
2026-04-07 01:22:27.771360 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/d530b3fc0d474923ab0f01e3ee8118aa_id_rsa (zuul-build-sshkey)
2026-04-07 01:22:27.772013 | orchestrator -> localhost | ok: Runtime: 0:00:00.020210
2026-04-07 01:22:27.788402 |
2026-04-07 01:22:27.788591 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-07 01:22:28.243028 | orchestrator | ok
2026-04-07 01:22:28.252327 |
2026-04-07 01:22:28.252459 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-07 01:22:28.286693 | orchestrator | skipping: Conditional result was False
2026-04-07 01:22:28.338948 |
2026-04-07 01:22:28.339083 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-07 01:22:28.763688 | orchestrator | ok
2026-04-07 01:22:28.787709 |
2026-04-07 01:22:28.787898 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-07 01:22:28.831345 | orchestrator | ok
2026-04-07 01:22:28.842428 |
2026-04-07 01:22:28.842635 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-07 01:22:29.150026 | orchestrator -> localhost | ok
2026-04-07 01:22:29.157870 |
2026-04-07 01:22:29.157980 | TASK [validate-host : Collect information about the host]
2026-04-07 01:22:30.436232 | orchestrator | ok
2026-04-07 01:22:30.454545 |
2026-04-07 01:22:30.454686 | TASK [validate-host : Sanitize hostname]
2026-04-07 01:22:30.528913 | orchestrator | ok
2026-04-07 01:22:30.537008 |
2026-04-07 01:22:30.537147 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-07 01:22:31.099054 | orchestrator -> localhost | changed
2026-04-07 01:22:31.113823 |
2026-04-07 01:22:31.113996 | TASK [validate-host : Collect information about zuul worker]
2026-04-07 01:22:31.580996 | orchestrator | ok
2026-04-07 01:22:31.591966 |
2026-04-07 01:22:31.592172 | TASK [validate-host : Write out all zuul information for each host]
2026-04-07 01:22:32.185088 | orchestrator -> localhost | changed
2026-04-07 01:22:32.196138 |
2026-04-07 01:22:32.196257 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-07 01:22:32.492726 | orchestrator | ok
2026-04-07 01:22:32.502342 |
2026-04-07 01:22:32.502471 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-07 01:22:56.414603 | orchestrator | changed:
2026-04-07 01:22:56.414894 | orchestrator | .d..t...... src/
2026-04-07 01:22:56.414943 | orchestrator | .d..t...... src/github.com/
2026-04-07 01:22:56.414975 | orchestrator | .d..t...... src/github.com/osism/
2026-04-07 01:22:56.415002 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-07 01:22:56.415028 | orchestrator | RedHat.yml
2026-04-07 01:22:56.431182 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-07 01:22:56.431204 | orchestrator | RedHat.yml
2026-04-07 01:22:56.431258 | orchestrator | = 2.2.0"...
2026-04-07 01:23:06.228290 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-07 01:23:06.245252 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-07 01:23:06.895587 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-07 01:23:07.692600 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-07 01:23:08.112448 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-07 01:23:08.722740 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-07 01:23:09.321679 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-07 01:23:10.230470 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-07 01:23:10.230536 | orchestrator |
2026-04-07 01:23:10.230543 | orchestrator | Providers are signed by their developers.
2026-04-07 01:23:10.230549 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-07 01:23:10.230554 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-07 01:23:10.230569 | orchestrator |
2026-04-07 01:23:10.230573 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-07 01:23:10.230591 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-07 01:23:10.230596 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-07 01:23:10.230600 | orchestrator | you run "tofu init" in the future.
2026-04-07 01:23:10.230876 | orchestrator |
2026-04-07 01:23:10.230885 | orchestrator | OpenTofu has been successfully initialized!
2026-04-07 01:23:10.230922 | orchestrator |
2026-04-07 01:23:10.230928 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-07 01:23:10.230932 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-07 01:23:10.230936 | orchestrator | should now work.
2026-04-07 01:23:10.230944 | orchestrator |
2026-04-07 01:23:10.230948 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-07 01:23:10.230952 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-07 01:23:10.230956 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-07 01:23:10.412725 | orchestrator | Created and switched to workspace "ci"!
2026-04-07 01:23:10.412790 | orchestrator |
2026-04-07 01:23:10.412797 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-07 01:23:10.412803 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-07 01:23:10.412809 | orchestrator | for this configuration.
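For context, the provider installs logged above imply a `required_providers` block roughly like the following sketch. The provider source addresses and the ">= 1.53.0" openstack constraint appear verbatim in the log; the constraints for hashicorp/local and hashicorp/null are truncated or absent there, so they are left out here rather than guessed:

```hcl
# Sketch only: reconstructed from the "tofu init" output in this log.
# Version constraints not visible in the log are intentionally omitted.
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```

With this block in place, `tofu init` resolves and pins the providers into `.terraform.lock.hcl`, which is why the log recommends committing that lock file.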
2026-04-07 01:23:10.572325 | orchestrator | ci.auto.tfvars
2026-04-07 01:23:11.219490 | orchestrator | default_custom.tf
2026-04-07 01:23:13.517605 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-07 01:23:14.088893 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-07 01:23:14.310620 | orchestrator |
2026-04-07 01:23:14.310720 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-07 01:23:14.310735 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-07 01:23:14.310746 | orchestrator | + create
2026-04-07 01:23:14.310757 | orchestrator | <= read (data resources)
2026-04-07 01:23:14.310767 | orchestrator |
2026-04-07 01:23:14.310777 | orchestrator | OpenTofu will perform the following actions:
2026-04-07 01:23:14.310816 | orchestrator |
2026-04-07 01:23:14.310836 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-07 01:23:14.310847 | orchestrator | # (config refers to values not yet known)
2026-04-07 01:23:14.310857 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-07 01:23:14.310867 | orchestrator | + checksum = (known after apply)
2026-04-07 01:23:14.310877 | orchestrator | + created_at = (known after apply)
2026-04-07 01:23:14.310886 | orchestrator | + file = (known after apply)
2026-04-07 01:23:14.310896 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.310946 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.310957 | orchestrator | + min_disk_gb = (known after apply)
2026-04-07 01:23:14.310967 | orchestrator | + min_ram_mb = (known after apply)
2026-04-07 01:23:14.310978 | orchestrator | + most_recent = true
2026-04-07 01:23:14.310988 | orchestrator | + name = (known after apply)
2026-04-07 01:23:14.310997 | orchestrator | + protected = (known after apply)
2026-04-07 01:23:14.311007 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.311020 | orchestrator | + schema = (known after apply)
2026-04-07 01:23:14.311030 | orchestrator | + size_bytes = (known after apply)
2026-04-07 01:23:14.311039 | orchestrator | + tags = (known after apply)
2026-04-07 01:23:14.311049 | orchestrator | + updated_at = (known after apply)
2026-04-07 01:23:14.311059 | orchestrator | }
2026-04-07 01:23:14.311069 | orchestrator |
2026-04-07 01:23:14.311079 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-07 01:23:14.311088 | orchestrator | # (config refers to values not yet known)
2026-04-07 01:23:14.311098 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-07 01:23:14.311108 | orchestrator | + checksum = (known after apply)
2026-04-07 01:23:14.311118 | orchestrator | + created_at = (known after apply)
2026-04-07 01:23:14.311127 | orchestrator | + file = (known after apply)
2026-04-07 01:23:14.311137 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311146 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.311156 | orchestrator | + min_disk_gb = (known after apply)
2026-04-07 01:23:14.311165 | orchestrator | + min_ram_mb = (known after apply)
2026-04-07 01:23:14.311175 | orchestrator | + most_recent = true
2026-04-07 01:23:14.311185 | orchestrator | + name = (known after apply)
2026-04-07 01:23:14.311194 | orchestrator | + protected = (known after apply)
2026-04-07 01:23:14.311204 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.311214 | orchestrator | + schema = (known after apply)
2026-04-07 01:23:14.311224 | orchestrator | + size_bytes = (known after apply)
2026-04-07 01:23:14.311233 | orchestrator | + tags = (known after apply)
2026-04-07 01:23:14.311255 | orchestrator | + updated_at = (known after apply)
2026-04-07 01:23:14.311265 | orchestrator | }
2026-04-07 01:23:14.311275 | orchestrator |
2026-04-07 01:23:14.311284 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-07 01:23:14.311294 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-07 01:23:14.311304 | orchestrator | + content = (known after apply)
2026-04-07 01:23:14.311314 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-07 01:23:14.311324 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-07 01:23:14.311333 | orchestrator | + content_md5 = (known after apply)
2026-04-07 01:23:14.311343 | orchestrator | + content_sha1 = (known after apply)
2026-04-07 01:23:14.311379 | orchestrator | + content_sha256 = (known after apply)
2026-04-07 01:23:14.311389 | orchestrator | + content_sha512 = (known after apply)
2026-04-07 01:23:14.311399 | orchestrator | + directory_permission = "0777"
2026-04-07 01:23:14.311408 | orchestrator | + file_permission = "0644"
2026-04-07 01:23:14.311418 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-07 01:23:14.311428 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311437 | orchestrator | }
2026-04-07 01:23:14.311447 | orchestrator |
2026-04-07 01:23:14.311456 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-07 01:23:14.311466 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-07 01:23:14.311476 | orchestrator | + content = (known after apply)
2026-04-07 01:23:14.311485 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-07 01:23:14.311495 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-07 01:23:14.311504 | orchestrator | + content_md5 = (known after apply)
2026-04-07 01:23:14.311514 | orchestrator | + content_sha1 = (known after apply)
2026-04-07 01:23:14.311523 | orchestrator | + content_sha256 = (known after apply)
2026-04-07 01:23:14.311545 | orchestrator | + content_sha512 = (known after apply)
2026-04-07 01:23:14.311556 | orchestrator | + directory_permission = "0777"
2026-04-07 01:23:14.311566 | orchestrator | + file_permission = "0644"
2026-04-07 01:23:14.311583 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-07 01:23:14.311593 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311603 | orchestrator | }
2026-04-07 01:23:14.311617 | orchestrator |
2026-04-07 01:23:14.311627 | orchestrator | # local_file.inventory will be created
2026-04-07 01:23:14.311637 | orchestrator | + resource "local_file" "inventory" {
2026-04-07 01:23:14.311646 | orchestrator | + content = (known after apply)
2026-04-07 01:23:14.311656 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-07 01:23:14.311665 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-07 01:23:14.311675 | orchestrator | + content_md5 = (known after apply)
2026-04-07 01:23:14.311684 | orchestrator | + content_sha1 = (known after apply)
2026-04-07 01:23:14.311695 | orchestrator | + content_sha256 = (known after apply)
2026-04-07 01:23:14.311704 | orchestrator | + content_sha512 = (known after apply)
2026-04-07 01:23:14.311714 | orchestrator | + directory_permission = "0777"
2026-04-07 01:23:14.311723 | orchestrator | + file_permission = "0644"
2026-04-07 01:23:14.311733 | orchestrator | + filename = "inventory.ci"
2026-04-07 01:23:14.311743 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311752 | orchestrator | }
2026-04-07 01:23:14.311761 | orchestrator |
2026-04-07 01:23:14.311771 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-07 01:23:14.311781 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-07 01:23:14.311790 | orchestrator | + content = (sensitive value)
2026-04-07 01:23:14.311799 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-07 01:23:14.311809 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-07 01:23:14.311818 | orchestrator | + content_md5 = (known after apply)
2026-04-07 01:23:14.311828 | orchestrator | + content_sha1 = (known after apply)
2026-04-07 01:23:14.311837 | orchestrator | + content_sha256 = (known after apply)
2026-04-07 01:23:14.311847 | orchestrator | + content_sha512 = (known after apply)
2026-04-07 01:23:14.311856 | orchestrator | + directory_permission = "0700"
2026-04-07 01:23:14.311866 | orchestrator | + file_permission = "0600"
2026-04-07 01:23:14.311876 | orchestrator | + filename = ".id_rsa.ci"
2026-04-07 01:23:14.311885 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311895 | orchestrator | }
2026-04-07 01:23:14.311904 | orchestrator |
2026-04-07 01:23:14.311914 | orchestrator | # null_resource.node_semaphore will be created
2026-04-07 01:23:14.311923 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-07 01:23:14.311933 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.311942 | orchestrator | }
2026-04-07 01:23:14.311952 | orchestrator |
2026-04-07 01:23:14.311961 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-07 01:23:14.311971 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-07 01:23:14.311980 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.311990 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.312000 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.312009 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.312019 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.312029 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-07 01:23:14.312038 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.312048 | orchestrator | + size = 80
2026-04-07 01:23:14.312057 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.312067 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.312076 | orchestrator | }
2026-04-07 01:23:14.312086 | orchestrator |
2026-04-07 01:23:14.312095 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-07 01:23:14.312105 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.312115 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.312124 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.312134 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.312149 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.312159 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.312169 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-07 01:23:14.312178 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.312188 | orchestrator | + size = 80
2026-04-07 01:23:14.312197 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.312207 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.312216 | orchestrator | }
2026-04-07 01:23:14.312226 | orchestrator |
2026-04-07 01:23:14.312236 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-07 01:23:14.312245 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.312255 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.312264 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.312274 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.312283 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.312293 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.312303 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-07 01:23:14.312312 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.312322 | orchestrator | + size = 80
2026-04-07 01:23:14.312331 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.312341 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.312421 | orchestrator | }
2026-04-07 01:23:14.316154 | orchestrator |
2026-04-07 01:23:14.316208 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-07 01:23:14.316217 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.316224 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316232 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316239 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316246 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.316253 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316259 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-07 01:23:14.316266 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316272 | orchestrator | + size = 80
2026-04-07 01:23:14.316291 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316299 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316306 | orchestrator | }
2026-04-07 01:23:14.316319 | orchestrator |
2026-04-07 01:23:14.316326 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-07 01:23:14.316333 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.316340 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316346 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316375 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316381 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.316388 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316395 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-07 01:23:14.316401 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316408 | orchestrator | + size = 80
2026-04-07 01:23:14.316414 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316421 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316428 | orchestrator | }
2026-04-07 01:23:14.316434 | orchestrator |
2026-04-07 01:23:14.316441 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-07 01:23:14.316447 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.316454 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316461 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316467 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316487 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.316494 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316500 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-07 01:23:14.316507 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316514 | orchestrator | + size = 80
2026-04-07 01:23:14.316520 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316527 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316534 | orchestrator | }
2026-04-07 01:23:14.316543 | orchestrator |
2026-04-07 01:23:14.316550 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-07 01:23:14.316556 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-07 01:23:14.316563 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316570 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316577 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316583 | orchestrator | + image_id = (known after apply)
2026-04-07 01:23:14.316590 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316596 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-07 01:23:14.316603 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316610 | orchestrator | + size = 80
2026-04-07 01:23:14.316616 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316623 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316629 | orchestrator | }
2026-04-07 01:23:14.316636 | orchestrator |
2026-04-07 01:23:14.316643 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-07 01:23:14.316651 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.316658 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316664 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316671 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316677 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316684 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-07 01:23:14.316691 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316698 | orchestrator | + size = 20
2026-04-07 01:23:14.316704 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316711 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316718 | orchestrator | }
2026-04-07 01:23:14.316724 | orchestrator |
2026-04-07 01:23:14.316731 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-07 01:23:14.316738 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.316744 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316751 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316758 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316764 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316771 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-07 01:23:14.316778 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316784 | orchestrator | + size = 20
2026-04-07 01:23:14.316791 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316798 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316804 | orchestrator | }
2026-04-07 01:23:14.316813 | orchestrator |
2026-04-07 01:23:14.316820 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-07 01:23:14.316827 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.316834 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316840 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316847 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316854 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316860 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-07 01:23:14.316867 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316879 | orchestrator | + size = 20
2026-04-07 01:23:14.316886 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316892 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316899 | orchestrator | }
2026-04-07 01:23:14.316906 | orchestrator |
2026-04-07 01:23:14.316912 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-07 01:23:14.316919 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.316926 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.316932 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.316939 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.316949 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.316956 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-07 01:23:14.316963 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.316970 | orchestrator | + size = 20
2026-04-07 01:23:14.316976 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.316983 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.316990 | orchestrator | }
2026-04-07 01:23:14.316997 | orchestrator |
2026-04-07 01:23:14.317003 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-07 01:23:14.317010 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.317017 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.317023 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.317030 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.317037 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.317043 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-07 01:23:14.317050 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.317057 | orchestrator | + size = 20
2026-04-07 01:23:14.317064 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.317070 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.317077 | orchestrator | }
2026-04-07 01:23:14.317084 | orchestrator |
2026-04-07 01:23:14.317090 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-07 01:23:14.317097 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.317104 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.317110 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.317117 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.317124 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.317130 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-07 01:23:14.317137 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.317144 | orchestrator | + size = 20
2026-04-07 01:23:14.317150 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.317157 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.317164 | orchestrator | }
2026-04-07 01:23:14.317170 | orchestrator |
2026-04-07 01:23:14.317177 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-07 01:23:14.317184 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.317190 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.317197 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.317204 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.317210 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.317217 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-07 01:23:14.317224 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.317230 | orchestrator | + size = 20
2026-04-07 01:23:14.317237 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.317244 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.317250 | orchestrator | }
2026-04-07 01:23:14.317259 | orchestrator |
2026-04-07 01:23:14.317266 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-07 01:23:14.317273 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-07 01:23:14.317285 | orchestrator | + attachment = (known after apply)
2026-04-07 01:23:14.317291 | orchestrator | + availability_zone = "nova"
2026-04-07 01:23:14.317298 | orchestrator | + id = (known after apply)
2026-04-07 01:23:14.317305 | orchestrator | + metadata = (known after apply)
2026-04-07 01:23:14.317311 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-07 01:23:14.317318 | orchestrator | + region = (known after apply)
2026-04-07 01:23:14.317325 | orchestrator | + size = 20
2026-04-07 01:23:14.317331 | orchestrator | + volume_retype_policy = "never"
2026-04-07 01:23:14.317338 | orchestrator | + volume_type = "ssd"
2026-04-07 01:23:14.317345 | orchestrator | }
2026-04-07 01:23:14.317362 | orchestrator |
2026-04-07 01:23:14.317368 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-07 01:23:14.317375 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-07 01:23:14.317382 | orchestrator | + attachment = (known after apply) 2026-04-07 01:23:14.317388 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.317395 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.317402 | orchestrator | + metadata = (known after apply) 2026-04-07 01:23:14.317408 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-07 01:23:14.317415 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.317421 | orchestrator | + size = 20 2026-04-07 01:23:14.317428 | orchestrator | + volume_retype_policy = "never" 2026-04-07 01:23:14.317435 | orchestrator | + volume_type = "ssd" 2026-04-07 01:23:14.317441 | orchestrator | } 2026-04-07 01:23:14.317448 | orchestrator | 2026-04-07 01:23:14.317454 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-07 01:23:14.317461 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-07 01:23:14.317468 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.317474 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.317481 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.317488 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.317494 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.317501 | orchestrator | + config_drive = true 2026-04-07 01:23:14.317511 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.317518 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.317525 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-07 01:23:14.317531 | orchestrator | + force_delete = false 2026-04-07 01:23:14.317538 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.317544 | 
orchestrator | + id = (known after apply) 2026-04-07 01:23:14.317551 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.317558 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.317564 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.317571 | orchestrator | + name = "testbed-manager" 2026-04-07 01:23:14.317577 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.317584 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.317591 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.317597 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.317604 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.317610 | orchestrator | + user_data = (sensitive value) 2026-04-07 01:23:14.317620 | orchestrator | 2026-04-07 01:23:14.317632 | orchestrator | + block_device { 2026-04-07 01:23:14.317643 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.317653 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.317663 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.317672 | orchestrator | + multiattach = false 2026-04-07 01:23:14.317682 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.317692 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.317707 | orchestrator | } 2026-04-07 01:23:14.317717 | orchestrator | 2026-04-07 01:23:14.317727 | orchestrator | + network { 2026-04-07 01:23:14.317738 | orchestrator | + access_network = false 2026-04-07 01:23:14.317749 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.317758 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.317768 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.317778 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.317788 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.317799 | orchestrator | + uuid = (known after apply) 2026-04-07 
01:23:14.317809 | orchestrator | } 2026-04-07 01:23:14.317819 | orchestrator | } 2026-04-07 01:23:14.317835 | orchestrator | 2026-04-07 01:23:14.317846 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-07 01:23:14.317858 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.317869 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.317879 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.317891 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.317902 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.317913 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.317920 | orchestrator | + config_drive = true 2026-04-07 01:23:14.317927 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.317933 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.317940 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.317946 | orchestrator | + force_delete = false 2026-04-07 01:23:14.317953 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.317960 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.317966 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.317973 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.317980 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.317986 | orchestrator | + name = "testbed-node-0" 2026-04-07 01:23:14.317993 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.317999 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.318006 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.318041 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.318056 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.318066 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.318077 | orchestrator | 2026-04-07 01:23:14.318087 | orchestrator | + block_device { 2026-04-07 01:23:14.318098 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.318108 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.318119 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.318131 | orchestrator | + multiattach = false 2026-04-07 01:23:14.318144 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.318156 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318168 | orchestrator | } 2026-04-07 01:23:14.318180 | orchestrator | 2026-04-07 01:23:14.318188 | orchestrator | + network { 2026-04-07 01:23:14.318195 | orchestrator | + access_network = false 2026-04-07 01:23:14.318201 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.318208 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.318215 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.318221 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.318228 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.318235 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318242 | orchestrator | } 2026-04-07 01:23:14.318248 | orchestrator | } 2026-04-07 01:23:14.318255 | orchestrator | 2026-04-07 01:23:14.318262 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-07 01:23:14.318268 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.318275 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.318290 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.318296 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.318303 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.318309 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.318316 
| orchestrator | + config_drive = true 2026-04-07 01:23:14.318322 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.318329 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.318336 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.318342 | orchestrator | + force_delete = false 2026-04-07 01:23:14.318398 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.318407 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.318413 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.318420 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.318427 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.318434 | orchestrator | + name = "testbed-node-1" 2026-04-07 01:23:14.318440 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.318447 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.318454 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.318460 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.318467 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.318479 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.318486 | orchestrator | 2026-04-07 01:23:14.318493 | orchestrator | + block_device { 2026-04-07 01:23:14.318500 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.318506 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.318513 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.318519 | orchestrator | + multiattach = false 2026-04-07 01:23:14.318526 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.318532 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318539 | orchestrator | } 2026-04-07 01:23:14.318546 | orchestrator | 2026-04-07 01:23:14.318552 | orchestrator | + network { 2026-04-07 01:23:14.318559 | orchestrator | + access_network = 
false 2026-04-07 01:23:14.318566 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.318572 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.318579 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.318585 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.318592 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.318598 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318605 | orchestrator | } 2026-04-07 01:23:14.318612 | orchestrator | } 2026-04-07 01:23:14.318625 | orchestrator | 2026-04-07 01:23:14.318631 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-07 01:23:14.318638 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.318645 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.318651 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.318659 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.318666 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.318672 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.318679 | orchestrator | + config_drive = true 2026-04-07 01:23:14.318686 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.318692 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.318699 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.318706 | orchestrator | + force_delete = false 2026-04-07 01:23:14.318712 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.318719 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.318726 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.318738 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.318745 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.318751 | orchestrator | + name = 
"testbed-node-2" 2026-04-07 01:23:14.318758 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.318764 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.318771 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.318777 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.318784 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.318791 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.318797 | orchestrator | 2026-04-07 01:23:14.318804 | orchestrator | + block_device { 2026-04-07 01:23:14.318811 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.318817 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.318824 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.318830 | orchestrator | + multiattach = false 2026-04-07 01:23:14.318837 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.318844 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318850 | orchestrator | } 2026-04-07 01:23:14.318857 | orchestrator | 2026-04-07 01:23:14.318864 | orchestrator | + network { 2026-04-07 01:23:14.318870 | orchestrator | + access_network = false 2026-04-07 01:23:14.318877 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.318884 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.318890 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.318897 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.318904 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.318910 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.318917 | orchestrator | } 2026-04-07 01:23:14.318923 | orchestrator | } 2026-04-07 01:23:14.318929 | orchestrator | 2026-04-07 01:23:14.318944 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-07 01:23:14.318950 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.318957 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.318963 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.318969 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.318975 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.318981 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.318987 | orchestrator | + config_drive = true 2026-04-07 01:23:14.318994 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.319000 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.319006 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.319012 | orchestrator | + force_delete = false 2026-04-07 01:23:14.319018 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.319024 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.319030 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.319036 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.319043 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.319049 | orchestrator | + name = "testbed-node-3" 2026-04-07 01:23:14.319055 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.319061 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.319067 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.319073 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.319079 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.319086 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.319092 | orchestrator | 2026-04-07 01:23:14.319098 | orchestrator | + block_device { 2026-04-07 01:23:14.319104 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.319110 | orchestrator | + delete_on_termination = false 2026-04-07 
01:23:14.319116 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.319127 | orchestrator | + multiattach = false 2026-04-07 01:23:14.319133 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.319139 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319145 | orchestrator | } 2026-04-07 01:23:14.319152 | orchestrator | 2026-04-07 01:23:14.319158 | orchestrator | + network { 2026-04-07 01:23:14.319164 | orchestrator | + access_network = false 2026-04-07 01:23:14.319170 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.319176 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.319182 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.319188 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.319195 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.319201 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319207 | orchestrator | } 2026-04-07 01:23:14.319213 | orchestrator | } 2026-04-07 01:23:14.319219 | orchestrator | 2026-04-07 01:23:14.319225 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-07 01:23:14.319232 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.319238 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.319244 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.319250 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.319257 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.319263 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.319269 | orchestrator | + config_drive = true 2026-04-07 01:23:14.319280 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.319294 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.319300 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.319310 | 
orchestrator | + force_delete = false 2026-04-07 01:23:14.319321 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.319337 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.319368 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.319378 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.319387 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.319398 | orchestrator | + name = "testbed-node-4" 2026-04-07 01:23:14.319407 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.319417 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.319426 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.319436 | orchestrator | + stop_before_destroy = false 2026-04-07 01:23:14.319446 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.319456 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.319467 | orchestrator | 2026-04-07 01:23:14.319478 | orchestrator | + block_device { 2026-04-07 01:23:14.319488 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.319498 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.319509 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.319518 | orchestrator | + multiattach = false 2026-04-07 01:23:14.319524 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.319531 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319537 | orchestrator | } 2026-04-07 01:23:14.319543 | orchestrator | 2026-04-07 01:23:14.319549 | orchestrator | + network { 2026-04-07 01:23:14.319555 | orchestrator | + access_network = false 2026-04-07 01:23:14.319561 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.319568 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.319574 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.319580 | orchestrator | + name = (known 
after apply) 2026-04-07 01:23:14.319586 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.319592 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319598 | orchestrator | } 2026-04-07 01:23:14.319604 | orchestrator | } 2026-04-07 01:23:14.319617 | orchestrator | 2026-04-07 01:23:14.319623 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-07 01:23:14.319630 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-07 01:23:14.319636 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-07 01:23:14.319642 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-07 01:23:14.319648 | orchestrator | + all_metadata = (known after apply) 2026-04-07 01:23:14.319654 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.319660 | orchestrator | + availability_zone = "nova" 2026-04-07 01:23:14.319666 | orchestrator | + config_drive = true 2026-04-07 01:23:14.319672 | orchestrator | + created = (known after apply) 2026-04-07 01:23:14.319678 | orchestrator | + flavor_id = (known after apply) 2026-04-07 01:23:14.319685 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-07 01:23:14.319691 | orchestrator | + force_delete = false 2026-04-07 01:23:14.319697 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-07 01:23:14.319703 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.319709 | orchestrator | + image_id = (known after apply) 2026-04-07 01:23:14.319715 | orchestrator | + image_name = (known after apply) 2026-04-07 01:23:14.319721 | orchestrator | + key_pair = "testbed" 2026-04-07 01:23:14.319727 | orchestrator | + name = "testbed-node-5" 2026-04-07 01:23:14.319733 | orchestrator | + power_state = "active" 2026-04-07 01:23:14.319739 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.319746 | orchestrator | + security_groups = (known after apply) 2026-04-07 01:23:14.319752 | orchestrator | + 
stop_before_destroy = false 2026-04-07 01:23:14.319758 | orchestrator | + updated = (known after apply) 2026-04-07 01:23:14.319764 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-07 01:23:14.319770 | orchestrator | 2026-04-07 01:23:14.319776 | orchestrator | + block_device { 2026-04-07 01:23:14.319782 | orchestrator | + boot_index = 0 2026-04-07 01:23:14.319789 | orchestrator | + delete_on_termination = false 2026-04-07 01:23:14.319795 | orchestrator | + destination_type = "volume" 2026-04-07 01:23:14.319801 | orchestrator | + multiattach = false 2026-04-07 01:23:14.319807 | orchestrator | + source_type = "volume" 2026-04-07 01:23:14.319813 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319819 | orchestrator | } 2026-04-07 01:23:14.319825 | orchestrator | 2026-04-07 01:23:14.319831 | orchestrator | + network { 2026-04-07 01:23:14.319838 | orchestrator | + access_network = false 2026-04-07 01:23:14.319844 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-07 01:23:14.319850 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-07 01:23:14.319856 | orchestrator | + mac = (known after apply) 2026-04-07 01:23:14.319863 | orchestrator | + name = (known after apply) 2026-04-07 01:23:14.319869 | orchestrator | + port = (known after apply) 2026-04-07 01:23:14.319875 | orchestrator | + uuid = (known after apply) 2026-04-07 01:23:14.319881 | orchestrator | } 2026-04-07 01:23:14.319887 | orchestrator | } 2026-04-07 01:23:14.319893 | orchestrator | 2026-04-07 01:23:14.319900 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-07 01:23:14.319906 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-07 01:23:14.319912 | orchestrator | + fingerprint = (known after apply) 2026-04-07 01:23:14.319918 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.319924 | orchestrator | + name = "testbed" 2026-04-07 01:23:14.319930 | orchestrator | + private_key = 
(sensitive value) 2026-04-07 01:23:14.319936 | orchestrator | + public_key = (known after apply) 2026-04-07 01:23:14.319942 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.319948 | orchestrator | + user_id = (known after apply) 2026-04-07 01:23:14.319955 | orchestrator | } 2026-04-07 01:23:14.319961 | orchestrator | 2026-04-07 01:23:14.319967 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-07 01:23:14.319973 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.319984 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.319990 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.319996 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320003 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320013 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320020 | orchestrator | } 2026-04-07 01:23:14.320026 | orchestrator | 2026-04-07 01:23:14.320032 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-07 01:23:14.320045 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320051 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320058 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320064 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320070 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320076 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320082 | orchestrator | } 2026-04-07 01:23:14.320089 | orchestrator | 2026-04-07 01:23:14.320095 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-07 01:23:14.320101 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-04-07 01:23:14.320108 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320114 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320120 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320126 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320132 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320138 | orchestrator | } 2026-04-07 01:23:14.320144 | orchestrator | 2026-04-07 01:23:14.320150 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-04-07 01:23:14.320156 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320163 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320169 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320175 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320181 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320187 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320193 | orchestrator | } 2026-04-07 01:23:14.320199 | orchestrator | 2026-04-07 01:23:14.320205 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-04-07 01:23:14.320212 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320218 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320224 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320230 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320236 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320242 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320248 | orchestrator | } 2026-04-07 01:23:14.320255 | orchestrator | 2026-04-07 01:23:14.320261 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-04-07 01:23:14.320267 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320273 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320279 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320285 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320291 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320297 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320304 | orchestrator | } 2026-04-07 01:23:14.320310 | orchestrator | 2026-04-07 01:23:14.320316 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-04-07 01:23:14.320322 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320328 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320334 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320341 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320374 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320392 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320402 | orchestrator | } 2026-04-07 01:23:14.320413 | orchestrator | 2026-04-07 01:23:14.320424 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-04-07 01:23:14.320434 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-07 01:23:14.320445 | orchestrator | + device = (known after apply) 2026-04-07 01:23:14.320455 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.320464 | orchestrator | + instance_id = (known after apply) 2026-04-07 01:23:14.320470 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.320477 | orchestrator | + volume_id = (known after apply) 2026-04-07 01:23:14.320483 | orchestrator | } 2026-04-07 
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
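The `manager_port_management` and `node_port_management` plans above each combine a static `fixed_ip` with `allowed_address_pairs`, which is how a port is permitted to also carry VIP traffic (here 192.168.16.8/32 and friends) without tripping port security. A hedged HCL sketch consistent with the planned manager-port values follows; the network, subnet, and pair semantics are inferred from the plan, and the resource references are assumptions:

```hcl
# Hypothetical sketch matching the planned manager port. References to
# net_management/subnet_management are assumed, not read from this log.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  # Additional address the port may source/sink (e.g. a shared VIP),
  # as shown by allowed_address_pairs in the plan output.
  allowed_address_pairs {
    ip_address = "192.168.16.8/32"
  }
}
```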
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
01:23:14.323632 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-04-07 01:23:14.323638 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-04-07 01:23:14.323643 | orchestrator | + description = "vrrp" 2026-04-07 01:23:14.323648 | orchestrator | + direction = "ingress" 2026-04-07 01:23:14.323654 | orchestrator | + ethertype = "IPv4" 2026-04-07 01:23:14.323659 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.323664 | orchestrator | + protocol = "112" 2026-04-07 01:23:14.323670 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.323675 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-07 01:23:14.323680 | orchestrator | + remote_group_id = (known after apply) 2026-04-07 01:23:14.323686 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-07 01:23:14.323691 | orchestrator | + security_group_id = (known after apply) 2026-04-07 01:23:14.323697 | orchestrator | + tenant_id = (known after apply) 2026-04-07 01:23:14.323702 | orchestrator | } 2026-04-07 01:23:14.323708 | orchestrator | 2026-04-07 01:23:14.323713 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-04-07 01:23:14.323719 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-04-07 01:23:14.323724 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.323729 | orchestrator | + description = "management security group" 2026-04-07 01:23:14.323735 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.323740 | orchestrator | + name = "testbed-management" 2026-04-07 01:23:14.323745 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.323751 | orchestrator | + stateful = (known after apply) 2026-04-07 01:23:14.323756 | orchestrator | + tenant_id = (known after apply) 2026-04-07 01:23:14.323761 | orchestrator | } 2026-04-07 
01:23:14.323766 | orchestrator | 2026-04-07 01:23:14.323772 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-04-07 01:23:14.323777 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-04-07 01:23:14.323783 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.323788 | orchestrator | + description = "node security group" 2026-04-07 01:23:14.323793 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.323799 | orchestrator | + name = "testbed-node" 2026-04-07 01:23:14.323804 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.323809 | orchestrator | + stateful = (known after apply) 2026-04-07 01:23:14.323814 | orchestrator | + tenant_id = (known after apply) 2026-04-07 01:23:14.323820 | orchestrator | } 2026-04-07 01:23:14.323825 | orchestrator | 2026-04-07 01:23:14.323831 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-04-07 01:23:14.323836 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-04-07 01:23:14.323841 | orchestrator | + all_tags = (known after apply) 2026-04-07 01:23:14.323847 | orchestrator | + cidr = "192.168.16.0/20" 2026-04-07 01:23:14.323852 | orchestrator | + dns_nameservers = [ 2026-04-07 01:23:14.323858 | orchestrator | + "8.8.8.8", 2026-04-07 01:23:14.323863 | orchestrator | + "9.9.9.9", 2026-04-07 01:23:14.323868 | orchestrator | ] 2026-04-07 01:23:14.323874 | orchestrator | + enable_dhcp = true 2026-04-07 01:23:14.323879 | orchestrator | + gateway_ip = (known after apply) 2026-04-07 01:23:14.323889 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.323894 | orchestrator | + ip_version = 4 2026-04-07 01:23:14.323900 | orchestrator | + ipv6_address_mode = (known after apply) 2026-04-07 01:23:14.323905 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-04-07 01:23:14.323910 | orchestrator | + name = "subnet-testbed-management" 
2026-04-07 01:23:14.323916 | orchestrator | + network_id = (known after apply) 2026-04-07 01:23:14.323921 | orchestrator | + no_gateway = false 2026-04-07 01:23:14.323927 | orchestrator | + region = (known after apply) 2026-04-07 01:23:14.323932 | orchestrator | + service_types = (known after apply) 2026-04-07 01:23:14.323944 | orchestrator | + tenant_id = (known after apply) 2026-04-07 01:23:14.323949 | orchestrator | 2026-04-07 01:23:14.323955 | orchestrator | + allocation_pool { 2026-04-07 01:23:14.323960 | orchestrator | + end = "192.168.31.250" 2026-04-07 01:23:14.323966 | orchestrator | + start = "192.168.31.200" 2026-04-07 01:23:14.323971 | orchestrator | } 2026-04-07 01:23:14.323977 | orchestrator | } 2026-04-07 01:23:14.323982 | orchestrator | 2026-04-07 01:23:14.323987 | orchestrator | # terraform_data.image will be created 2026-04-07 01:23:14.323993 | orchestrator | + resource "terraform_data" "image" { 2026-04-07 01:23:14.323998 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.324003 | orchestrator | + input = "Ubuntu 24.04" 2026-04-07 01:23:14.324009 | orchestrator | + output = (known after apply) 2026-04-07 01:23:14.324014 | orchestrator | } 2026-04-07 01:23:14.324019 | orchestrator | 2026-04-07 01:23:14.324025 | orchestrator | # terraform_data.image_node will be created 2026-04-07 01:23:14.324030 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-07 01:23:14.324035 | orchestrator | + id = (known after apply) 2026-04-07 01:23:14.324041 | orchestrator | + input = "Ubuntu 24.04" 2026-04-07 01:23:14.324046 | orchestrator | + output = (known after apply) 2026-04-07 01:23:14.324052 | orchestrator | } 2026-04-07 01:23:14.324057 | orchestrator | 2026-04-07 01:23:14.324062 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-07 01:23:14.324068 | orchestrator | 2026-04-07 01:23:14.324073 | orchestrator | Changes to Outputs: 2026-04-07 01:23:14.324078 | orchestrator | + manager_address = (sensitive value) 2026-04-07 01:23:14.324084 | orchestrator | + private_key = (sensitive value) 2026-04-07 01:23:14.569300 | orchestrator | terraform_data.image_node: Creating... 2026-04-07 01:23:14.569407 | orchestrator | terraform_data.image: Creating... 2026-04-07 01:23:14.569422 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6433c6df-954b-fe61-9ae0-7f6cdd40952a] 2026-04-07 01:23:14.569433 | orchestrator | terraform_data.image: Creation complete after 0s [id=e1ea0ef4-08ab-32f5-fef9-925880e4f5fb] 2026-04-07 01:23:14.589644 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-07 01:23:14.594952 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-07 01:23:14.599262 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-07 01:23:14.600005 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-07 01:23:14.601573 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-07 01:23:14.601842 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-07 01:23:14.602406 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-07 01:23:14.603180 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-07 01:23:14.612766 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-07 01:23:14.613669 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-07 01:23:15.066450 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-07 01:23:15.075658 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2026-04-07 01:23:15.114643 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-07 01:23:15.125214 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-07 01:23:15.396371 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-07 01:23:15.401897 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-07 01:23:15.576308 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=31f8401d-36db-4eb4-97fb-927fea010441]
2026-04-07 01:23:15.587695 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-07 01:23:18.262235 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=cf020a49-c89e-40cb-ad7e-e7245d038c5c]
2026-04-07 01:23:18.274700 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=45504c97-465e-453c-b9da-4a892d5e284d]
2026-04-07 01:23:18.281269 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-07 01:23:18.282536 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7]
2026-04-07 01:23:18.287232 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4ea74e91-c20c-41f1-919c-d0143e478dbc]
2026-04-07 01:23:18.296345 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=99243621-e146-4726-8289-3c034b504539]
2026-04-07 01:23:18.299559 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=bcabc37aeeca6e22f3c3a314139110165ea75ef6]
2026-04-07 01:23:18.300388 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=62e8e967-b9fa-4acb-b372-c409143b479f]
2026-04-07 01:23:18.300443 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-07 01:23:18.300701 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-07 01:23:18.303550 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-07 01:23:18.303821 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-07 01:23:18.305666 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=bc7c721557c8b90333e9fabe19c044b1b977b163]
2026-04-07 01:23:18.312601 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-07 01:23:18.312666 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-07 01:23:18.316900 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-07 01:23:18.350811 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=b27a0136-39f6-47a5-af08-be8e3f686599]
2026-04-07 01:23:18.359744 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=d0766011-b4d1-4704-bfcf-26d11fc4e2cc]
2026-04-07 01:23:18.362970 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-07 01:23:18.599246 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=7a8fe78b-90ad-4857-b477-d40f4ed756fc]
2026-04-07 01:23:18.941278 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=f80bc7fe-963f-46da-995b-2ace11698774]
2026-04-07 01:23:19.118828 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=44c6194b-3c5b-4d5d-b9d9-c6aa49017977]
2026-04-07 01:23:19.130539 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-07 01:23:21.697913 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=bfdec1fc-6534-4f16-a48b-f139f04a1945]
2026-04-07 01:23:21.706174 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=bb3b1ac7-71e4-4418-bc53-c930c1882772]
2026-04-07 01:23:21.711929 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=aca08a9c-83bc-497a-93bb-837b1de894dc]
2026-04-07 01:23:21.732404 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=36ff44a1-7c72-437b-8a26-984714c4230e]
2026-04-07 01:23:21.734810 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=cddfb89c-0910-445c-9577-7506a4630395]
2026-04-07 01:23:21.764272 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=2524aa84-ef66-48f2-a92a-bce47df89de2]
2026-04-07 01:23:22.040163 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=7f89dca8-4750-4aae-98fa-ee83ba730608]
2026-04-07 01:23:22.047518 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-07 01:23:22.047868 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-07 01:23:22.048674 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-07 01:23:22.216854 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=364c91bb-27fe-482d-a9f5-1b6232b24855]
2026-04-07 01:23:22.226736 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-07 01:23:22.228709 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-07 01:23:22.228796 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-07 01:23:22.231992 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-07 01:23:22.232063 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-07 01:23:22.235423 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-07 01:23:22.258906 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=35da4104-4e16-42e5-b17e-c7659d7f7d3b]
2026-04-07 01:23:22.268069 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-07 01:23:22.269645 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-07 01:23:22.270705 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-07 01:23:22.415225 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=d9d052bf-db7d-4be6-968f-79a99b68aa19]
2026-04-07 01:23:22.421988 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-07 01:23:22.443235 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=787d719a-be78-4d50-a189-08b61742d03c]
2026-04-07 01:23:22.458790 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-07 01:23:22.593224 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=f780c955-e4a9-4799-848f-a2f8f89463a4]
2026-04-07 01:23:22.601892 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-07 01:23:22.639111 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=90da72d0-be73-42cd-935e-a5deb6189714]
2026-04-07 01:23:22.654985 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-07 01:23:22.812197 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=2a34b003-7fdd-473a-8fdc-a8275139bca2]
2026-04-07 01:23:22.814839 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=cadc1ee4-c5a2-4e21-81d4-1ba3220b33e0]
2026-04-07 01:23:22.827729 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-07 01:23:22.828864 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-07 01:23:23.013657 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=141d6257-df99-4f19-ba79-febf0df315ec]
2026-04-07 01:23:23.026157 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-07 01:23:23.060595 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=44169bad-f700-4582-ba09-c8b15e8d3c0f]
2026-04-07 01:23:23.061655 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=2ecc77c0-11d9-4514-aa7e-5daf1a68f11f]
2026-04-07 01:23:23.261440 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=8aff9767-7f1d-4772-9617-8bc22f2fa0f6]
2026-04-07 01:23:23.456637 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=c8032b9c-581d-4396-8cbd-fca097adcd51]
2026-04-07 01:23:23.556983 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=b9b697c0-419e-49a6-b335-9ea013ad97a0]
2026-04-07 01:23:23.604429 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=29db1d6e-50d1-481d-87a7-2d8746850f87]
2026-04-07 01:23:23.625674 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1a2cbbfb-f811-4329-a1f8-8fb019f8cbf8]
2026-04-07 01:23:23.919448 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=b16d1eab-f661-4eb7-9e8d-659188efe8fd]
2026-04-07 01:23:24.094610 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d3fac3b3-3019-41bd-bd7e-b12911910e82]
2026-04-07 01:23:24.723089 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=f8cc1522-d967-47c4-bdb5-485b8bfb9cdb]
2026-04-07 01:23:24.741893 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-07 01:23:24.759809 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-07 01:23:24.766506 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-07 01:23:24.772234 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-07 01:23:24.775562 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-07 01:23:24.776948 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-07 01:23:24.777970 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-07 01:23:26.137567 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=0ed6616e-e4df-4147-914b-13d7001eab90]
2026-04-07 01:23:26.145188 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-07 01:23:26.153502 | orchestrator | local_file.inventory: Creating...
2026-04-07 01:23:26.154600 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-07 01:23:26.160579 | orchestrator | local_file.inventory: Creation complete after 0s [id=2574be06cee9310b96317b85c0da226e12c3a585]
2026-04-07 01:23:26.163538 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7eccc307a6b165c87a819c4a6f052f096d6f6568]
2026-04-07 01:23:26.883715 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0ed6616e-e4df-4147-914b-13d7001eab90]
2026-04-07 01:23:34.762672 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-07 01:23:34.768185 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-07 01:23:34.776496 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-07 01:23:34.776591 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-07 01:23:34.781824 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-07 01:23:34.781878 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-07 01:23:44.763878 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-07 01:23:44.769118 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-07 01:23:44.777605 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-07 01:23:44.777696 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-07 01:23:44.783013 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-07 01:23:44.783088 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-07 01:23:45.138582 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=935becaf-6468-430f-a259-a68e3a465d42]
2026-04-07 01:23:45.162108 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=a2ea49a6-8cc6-49f7-8fb3-1ee4ed96918a]
2026-04-07 01:23:45.165775 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=3f2fd9e2-cc37-4ba5-a424-c0e22acfafcb]
2026-04-07 01:23:54.778464 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-07 01:23:54.778569 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-07 01:23:54.783752 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-07 01:23:55.447808 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=2bd0e45c-f27f-4442-aff5-8d13a7d00afa]
2026-04-07 01:23:55.539697 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=258b0de1-3c1d-4ebe-96ac-f9d66dd1f0cd]
2026-04-07 01:23:55.565592 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=e3d98bf3-faba-447c-8d41-9f5fe20f4829]
2026-04-07 01:23:55.589228 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-07 01:23:55.598072 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5517298716398553296]
2026-04-07 01:23:55.606731 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-07 01:23:55.606827 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-07 01:23:55.607445 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-07 01:23:55.610267 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-07 01:23:55.611416 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-07 01:23:55.614807 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-07 01:23:55.614852 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-07 01:23:55.616474 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-07 01:23:55.625545 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-07 01:23:55.638849 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-07 01:23:58.979262 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=3f2fd9e2-cc37-4ba5-a424-c0e22acfafcb/d0766011-b4d1-4704-bfcf-26d11fc4e2cc]
2026-04-07 01:23:58.979548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=a2ea49a6-8cc6-49f7-8fb3-1ee4ed96918a/4ea74e91-c20c-41f1-919c-d0143e478dbc]
2026-04-07 01:23:59.007506 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=a2ea49a6-8cc6-49f7-8fb3-1ee4ed96918a/62e8e967-b9fa-4acb-b372-c409143b479f]
2026-04-07 01:23:59.007605 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=935becaf-6468-430f-a259-a68e3a465d42/b27a0136-39f6-47a5-af08-be8e3f686599]
2026-04-07 01:23:59.037770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=935becaf-6468-430f-a259-a68e3a465d42/45504c97-465e-453c-b9da-4a892d5e284d]
2026-04-07 01:23:59.046461 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=3f2fd9e2-cc37-4ba5-a424-c0e22acfafcb/99243621-e146-4726-8289-3c034b504539]
2026-04-07 01:24:05.109529 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=a2ea49a6-8cc6-49f7-8fb3-1ee4ed96918a/cf020a49-c89e-40cb-ad7e-e7245d038c5c]
2026-04-07 01:24:05.114996 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=3f2fd9e2-cc37-4ba5-a424-c0e22acfafcb/7a8fe78b-90ad-4857-b477-d40f4ed756fc]
2026-04-07 01:24:05.145042 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=935becaf-6468-430f-a259-a68e3a465d42/b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7]
2026-04-07 01:24:05.640374 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-07 01:24:15.641383 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-07 01:24:16.527765 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=a0c246a3-7abd-468f-8b47-e1b27f63139b]
2026-04-07 01:24:16.554420 | orchestrator |
2026-04-07 01:24:16.554498 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-07 01:24:16.554522 | orchestrator |
2026-04-07 01:24:16.554535 | orchestrator | Outputs:
2026-04-07 01:24:16.554542 | orchestrator |
2026-04-07 01:24:16.554549 | orchestrator | manager_address =
2026-04-07 01:24:16.554556 | orchestrator | private_key =
2026-04-07 01:24:16.851354 | orchestrator | ok: Runtime: 0:01:10.555931
2026-04-07 01:24:16.871941 |
2026-04-07 01:24:16.872059 | TASK [Fetch manager address]
2026-04-07 01:24:17.345932 | orchestrator | ok
2026-04-07 01:24:17.356703 |
2026-04-07 01:24:17.356839 | TASK [Set manager_host address]
2026-04-07 01:24:17.435952 | orchestrator | ok
2026-04-07 01:24:17.445038 |
2026-04-07 01:24:17.445163 | LOOP [Update ansible collections]
2026-04-07 01:24:18.675208 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 01:24:18.675480 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-07 01:24:18.675548 | orchestrator | Starting galaxy collection install process
2026-04-07 01:24:18.675590 | orchestrator | Process install dependency map
2026-04-07 01:24:18.675617 | orchestrator | Starting collection install process
2026-04-07 01:24:18.675641 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-04-07 01:24:18.675668 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-04-07 01:24:18.675699 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-07 01:24:18.675762 | orchestrator | ok: Item: commons Runtime: 0:00:00.895974
2026-04-07 01:24:19.674561 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 01:24:19.674727 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-07 01:24:19.674776 | orchestrator | Starting galaxy collection install process
2026-04-07 01:24:19.674813 | orchestrator | Process install dependency map
2026-04-07 01:24:19.674876 | orchestrator | Starting collection install process
2026-04-07 01:24:19.674912 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-04-07 01:24:19.674945 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-04-07 01:24:19.674977 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-07 01:24:19.675025 | orchestrator | ok: Item: services Runtime: 0:00:00.685764
2026-04-07 01:24:19.696401 |
2026-04-07 01:24:19.696570 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-07 01:24:30.323794 | orchestrator | ok
2026-04-07 01:24:30.331462 |
2026-04-07 01:24:30.331610 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-07 01:25:30.367562 | orchestrator | ok
2026-04-07 01:25:30.377741 |
2026-04-07 01:25:30.377863 | TASK [Fetch manager ssh hostkey]
2026-04-07 01:25:31.949438 | orchestrator | Output suppressed because no_log was given
2026-04-07 01:25:31.964657 |
2026-04-07 01:25:31.964829 | TASK [Get ssh keypair from terraform environment]
2026-04-07 01:25:32.501066 | orchestrator | ok: Runtime: 0:00:00.012171
2026-04-07 01:25:32.508971 |
2026-04-07 01:25:32.509096 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-07 01:25:32.558097 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-07 01:25:32.568432 |
2026-04-07 01:25:32.568583 | TASK [Run manager part 0]
2026-04-07 01:25:33.466263 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-07 01:25:33.518702 | orchestrator |
2026-04-07 01:25:33.518749 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-07 01:25:33.518756 | orchestrator |
2026-04-07 01:25:33.518769 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-07 01:25:35.810828 | orchestrator | ok: [testbed-manager]
2026-04-07 01:25:35.810878 | orchestrator |
2026-04-07 01:25:35.811001 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-07 01:25:35.811013 | orchestrator |
2026-04-07 01:25:35.811023 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-07 01:25:37.911943 | orchestrator | ok: [testbed-manager]
2026-04-07 01:25:37.912126 | orchestrator |
2026-04-07 01:25:37.912147 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-07 01:25:38.677367 | orchestrator | ok: [testbed-manager]
2026-04-07 01:25:38.677475 | orchestrator |
2026-04-07 01:25:38.677491 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-07 01:25:38.730476 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:25:38.730536 | orchestrator |
2026-04-07 01:25:38.730546 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-07 01:25:38.768003 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:25:38.768073 | orchestrator |
2026-04-07 01:25:38.768082 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-07 01:25:38.805048 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:25:38.805120 | orchestrator | 2026-04-07 01:25:38.805130 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-07 01:25:39.669965 | orchestrator | changed: [testbed-manager] 2026-04-07 01:25:39.670090 | orchestrator | 2026-04-07 01:25:39.670107 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-07 01:28:56.153785 | orchestrator | changed: [testbed-manager] 2026-04-07 01:28:56.153861 | orchestrator | 2026-04-07 01:28:56.153873 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-07 01:30:20.771068 | orchestrator | changed: [testbed-manager] 2026-04-07 01:30:20.771195 | orchestrator | 2026-04-07 01:30:20.771227 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-07 01:30:47.979488 | orchestrator | changed: [testbed-manager] 2026-04-07 01:30:47.979608 | orchestrator | 2026-04-07 01:30:47.979628 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-07 01:30:58.435812 | orchestrator | changed: [testbed-manager] 2026-04-07 01:30:58.435895 | orchestrator | 2026-04-07 01:30:58.435907 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-07 01:30:58.486727 | orchestrator | ok: [testbed-manager] 2026-04-07 01:30:58.486816 | orchestrator | 2026-04-07 01:30:58.486833 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-07 01:30:59.402452 | orchestrator | ok: [testbed-manager] 2026-04-07 01:30:59.402597 | orchestrator | 2026-04-07 01:30:59.402614 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-04-07 01:31:00.212471 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:00.212582 | orchestrator | 2026-04-07 01:31:00.212601 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-07 01:31:07.161014 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:07.161110 | orchestrator | 2026-04-07 01:31:07.161128 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-07 01:31:13.767821 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:13.767895 | orchestrator | 2026-04-07 01:31:13.767905 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-07 01:31:16.658841 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:16.658889 | orchestrator | 2026-04-07 01:31:16.658898 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-07 01:31:18.634491 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:18.634586 | orchestrator | 2026-04-07 01:31:18.634595 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-07 01:31:19.866405 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-07 01:31:19.866464 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-07 01:31:19.866472 | orchestrator | 2026-04-07 01:31:19.866481 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-07 01:31:19.912013 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-07 01:31:19.912071 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-07 01:31:19.912078 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-04-07 01:31:19.912085 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-07 01:31:23.627615 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-07 01:31:23.627732 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-07 01:31:23.627756 | orchestrator | 2026-04-07 01:31:23.627776 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-07 01:31:24.257307 | orchestrator | changed: [testbed-manager] 2026-04-07 01:31:24.257397 | orchestrator | 2026-04-07 01:31:24.257414 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-07 01:32:45.245725 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-07 01:32:45.245779 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-07 01:32:45.245788 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-07 01:32:45.245796 | orchestrator | 2026-04-07 01:32:45.245803 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-07 01:32:47.754438 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-07 01:32:47.754498 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-07 01:32:47.754509 | orchestrator | 2026-04-07 01:32:47.754520 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-07 01:32:47.754559 | orchestrator | 2026-04-07 01:32:47.754568 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:32:49.271370 | orchestrator | ok: [testbed-manager] 2026-04-07 01:32:49.271432 | orchestrator | 2026-04-07 01:32:49.271445 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-04-07 01:32:49.317355 | orchestrator | ok: [testbed-manager] 2026-04-07 01:32:49.317424 | orchestrator | 2026-04-07 01:32:49.317434 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-07 01:32:49.375917 | orchestrator | ok: [testbed-manager] 2026-04-07 01:32:49.375963 | orchestrator | 2026-04-07 01:32:49.375972 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-07 01:32:50.249077 | orchestrator | changed: [testbed-manager] 2026-04-07 01:32:50.250117 | orchestrator | 2026-04-07 01:32:50.250179 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-07 01:32:50.993473 | orchestrator | changed: [testbed-manager] 2026-04-07 01:32:50.993521 | orchestrator | 2026-04-07 01:32:50.993527 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-07 01:32:52.403392 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-07 01:32:52.403459 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-07 01:32:52.403470 | orchestrator | 2026-04-07 01:32:52.403480 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-07 01:32:53.904196 | orchestrator | changed: [testbed-manager] 2026-04-07 01:32:53.904390 | orchestrator | 2026-04-07 01:32:53.904415 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-07 01:32:55.802488 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-07 01:32:55.802561 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-07 01:32:55.802580 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-07 01:32:55.802587 | orchestrator | 2026-04-07 01:32:55.802595 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-07 01:32:55.851102 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:55.851151 | orchestrator | 2026-04-07 01:32:55.851162 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-07 01:32:55.924135 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:55.924174 | orchestrator | 2026-04-07 01:32:55.924180 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-07 01:32:56.515419 | orchestrator | changed: [testbed-manager] 2026-04-07 01:32:56.515457 | orchestrator | 2026-04-07 01:32:56.515464 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-07 01:32:56.618363 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:56.618397 | orchestrator | 2026-04-07 01:32:56.618402 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-07 01:32:57.533143 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 01:32:57.533192 | orchestrator | changed: [testbed-manager] 2026-04-07 01:32:57.533200 | orchestrator | 2026-04-07 01:32:57.533206 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-07 01:32:57.574477 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:57.574519 | orchestrator | 2026-04-07 01:32:57.574527 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-07 01:32:57.613669 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:57.613711 | orchestrator | 2026-04-07 01:32:57.613719 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-07 01:32:57.654334 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:57.654375 | orchestrator | 2026-04-07 01:32:57.654383 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-07 01:32:57.738127 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:32:57.738168 | orchestrator | 2026-04-07 01:32:57.738176 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-07 01:32:58.511471 | orchestrator | ok: [testbed-manager] 2026-04-07 01:32:58.511599 | orchestrator | 2026-04-07 01:32:58.511618 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-07 01:32:58.511631 | orchestrator | 2026-04-07 01:32:58.511644 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:33:00.004999 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:00.005087 | orchestrator | 2026-04-07 01:33:00.005101 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-07 01:33:01.052008 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:01.052049 | orchestrator | 2026-04-07 01:33:01.052056 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:33:01.052062 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-07 01:33:01.052067 | orchestrator | 2026-04-07 01:33:01.379629 | orchestrator | ok: Runtime: 0:07:28.302298 2026-04-07 01:33:01.396954 | 2026-04-07 01:33:01.397097 | TASK [Point out that logging in to the manager is now possible] 2026-04-07 01:33:01.446435 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-04-07 01:33:01.456696 | 2026-04-07 01:33:01.456821 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-07 01:33:01.495682 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager.
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-07 01:33:01.505875 | 2026-04-07 01:33:01.506004 | TASK [Run manager part 1 + 2] 2026-04-07 01:33:02.398593 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-07 01:33:02.462669 | orchestrator | 2026-04-07 01:33:02.462736 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-07 01:33:02.462746 | orchestrator | 2026-04-07 01:33:02.462764 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:33:05.671278 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:05.671343 | orchestrator | 2026-04-07 01:33:05.671373 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-07 01:33:05.712735 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:33:05.712793 | orchestrator | 2026-04-07 01:33:05.712804 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-07 01:33:05.755319 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:05.755366 | orchestrator | 2026-04-07 01:33:05.755374 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-07 01:33:05.797780 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:05.797843 | orchestrator | 2026-04-07 01:33:05.797854 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-07 01:33:05.883070 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:05.883132 | orchestrator | 2026-04-07 01:33:05.883144 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-07 01:33:05.950930 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:05.950997 | orchestrator | 2026-04-07 01:33:05.951011 | orchestrator | TASK
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-07 01:33:06.008146 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-07 01:33:06.008206 | orchestrator | 2026-04-07 01:33:06.008215 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-07 01:33:06.802847 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:06.802911 | orchestrator | 2026-04-07 01:33:06.802924 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-07 01:33:06.841610 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:33:06.841655 | orchestrator | 2026-04-07 01:33:06.841661 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-07 01:33:08.321910 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:08.321985 | orchestrator | 2026-04-07 01:33:08.322134 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-07 01:33:08.947189 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:08.947294 | orchestrator | 2026-04-07 01:33:08.947310 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-07 01:33:10.127666 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:10.127834 | orchestrator | 2026-04-07 01:33:10.127853 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-07 01:33:27.335062 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:27.335162 | orchestrator | 2026-04-07 01:33:27.335178 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-07 01:33:28.086090 | orchestrator | ok: [testbed-manager] 2026-04-07 01:33:28.086182 | orchestrator | 2026-04-07 
01:33:28.086198 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-07 01:33:28.171602 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:33:28.171659 | orchestrator | 2026-04-07 01:33:28.171666 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-07 01:33:29.162497 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:29.162626 | orchestrator | 2026-04-07 01:33:29.162646 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-07 01:33:30.154215 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:30.154276 | orchestrator | 2026-04-07 01:33:30.154285 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-07 01:33:30.690639 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:30.690686 | orchestrator | 2026-04-07 01:33:30.690692 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-07 01:33:30.733312 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-07 01:33:30.733420 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-07 01:33:30.733433 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-07 01:33:30.733443 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-07 01:33:32.815431 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:32.815571 | orchestrator | 2026-04-07 01:33:32.815593 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-07 01:33:42.455607 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-07 01:33:42.455698 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-07 01:33:42.455712 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-07 01:33:42.455723 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-07 01:33:42.455740 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-07 01:33:42.455749 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-07 01:33:42.455758 | orchestrator | 2026-04-07 01:33:42.455768 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-07 01:33:43.616379 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:43.616486 | orchestrator | 2026-04-07 01:33:43.616503 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-07 01:33:46.960835 | orchestrator | changed: [testbed-manager] 2026-04-07 01:33:46.960875 | orchestrator | 2026-04-07 01:33:46.960882 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-07 01:33:47.001958 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:33:47.001998 | orchestrator | 2026-04-07 01:33:47.002006 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-07 01:35:45.896663 | orchestrator | changed: [testbed-manager] 2026-04-07 01:35:45.896777 | orchestrator | 2026-04-07 01:35:45.896798 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-07 01:35:47.208518 | orchestrator | ok: [testbed-manager] 2026-04-07 01:35:47.208556 | 
orchestrator | 2026-04-07 01:35:47.208562 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:35:47.208584 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-07 01:35:47.208588 | orchestrator | 2026-04-07 01:35:47.651245 | orchestrator | ok: Runtime: 0:02:45.506245 2026-04-07 01:35:47.668752 | 2026-04-07 01:35:47.668899 | TASK [Reboot manager] 2026-04-07 01:35:49.205071 | orchestrator | ok: Runtime: 0:00:01.004941 2026-04-07 01:35:49.222138 | 2026-04-07 01:35:49.222325 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-07 01:36:05.884771 | orchestrator | ok 2026-04-07 01:36:05.895542 | 2026-04-07 01:36:05.895675 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-07 01:37:05.940119 | orchestrator | ok 2026-04-07 01:37:05.950701 | 2026-04-07 01:37:05.950872 | TASK [Deploy manager + bootstrap nodes] 2026-04-07 01:37:08.973345 | orchestrator | 2026-04-07 01:37:08.973575 | orchestrator | # DEPLOY MANAGER 2026-04-07 01:37:08.973604 | orchestrator | 2026-04-07 01:37:08.973618 | orchestrator | + set -e 2026-04-07 01:37:08.973631 | orchestrator | + echo 2026-04-07 01:37:08.973644 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-07 01:37:08.973660 | orchestrator | + echo 2026-04-07 01:37:08.973706 | orchestrator | + cat /opt/manager-vars.sh 2026-04-07 01:37:08.978856 | orchestrator | export NUMBER_OF_NODES=6 2026-04-07 01:37:08.978953 | orchestrator | 2026-04-07 01:37:08.978971 | orchestrator | export CEPH_VERSION=reef 2026-04-07 01:37:08.978986 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-07 01:37:08.979001 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-07 01:37:08.979034 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-07 01:37:08.979048 | orchestrator | 2026-04-07 01:37:08.979071 | orchestrator | export ARA=false 2026-04-07 01:37:08.979086 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-07 01:37:08.979106 | orchestrator | export TEMPEST=false 2026-04-07 01:37:08.979122 | orchestrator | export IS_ZUUL=true 2026-04-07 01:37:08.979137 | orchestrator | 2026-04-07 01:37:08.979157 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 01:37:08.979172 | orchestrator | export EXTERNAL_API=false 2026-04-07 01:37:08.979186 | orchestrator | 2026-04-07 01:37:08.979200 | orchestrator | export IMAGE_USER=ubuntu 2026-04-07 01:37:08.979219 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-07 01:37:08.979233 | orchestrator | 2026-04-07 01:37:08.979247 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-07 01:37:08.979275 | orchestrator | 2026-04-07 01:37:08.979289 | orchestrator | + echo 2026-04-07 01:37:08.979304 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 01:37:08.980294 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 01:37:08.980329 | orchestrator | ++ INTERACTIVE=false 2026-04-07 01:37:08.980339 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 01:37:08.980349 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 01:37:08.980722 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 01:37:08.980737 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 01:37:08.980746 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 01:37:08.980755 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 01:37:08.980763 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 01:37:08.980771 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 01:37:08.980781 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 01:37:08.980789 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 01:37:08.980797 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 01:37:08.980805 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 01:37:08.980901 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 01:37:08.980917 | orchestrator | ++ export ARA=false 
2026-04-07 01:37:08.980930 | orchestrator | ++ ARA=false 2026-04-07 01:37:08.980944 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 01:37:08.980956 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 01:37:08.980974 | orchestrator | ++ export TEMPEST=false 2026-04-07 01:37:08.980988 | orchestrator | ++ TEMPEST=false 2026-04-07 01:37:08.981039 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 01:37:08.981054 | orchestrator | ++ IS_ZUUL=true 2026-04-07 01:37:08.981068 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 01:37:08.981088 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 01:37:08.981103 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 01:37:08.981117 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 01:37:08.981155 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 01:37:08.981215 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 01:37:08.981225 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 01:37:08.981234 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 01:37:08.981242 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 01:37:08.981251 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 01:37:08.981710 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-07 01:37:09.038121 | orchestrator | + docker version 2026-04-07 01:37:09.150469 | orchestrator | Client: Docker Engine - Community 2026-04-07 01:37:09.150606 | orchestrator | Version: 27.5.1 2026-04-07 01:37:09.150624 | orchestrator | API version: 1.47 2026-04-07 01:37:09.150634 | orchestrator | Go version: go1.22.11 2026-04-07 01:37:09.150644 | orchestrator | Git commit: 9f9e405 2026-04-07 01:37:09.150653 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-07 01:37:09.150663 | orchestrator | OS/Arch: linux/amd64 2026-04-07 01:37:09.150673 | orchestrator | Context: default 2026-04-07 01:37:09.150682 | orchestrator | 2026-04-07 01:37:09.150691 | 
orchestrator | Server: Docker Engine - Community 2026-04-07 01:37:09.150701 | orchestrator | Engine: 2026-04-07 01:37:09.150711 | orchestrator | Version: 27.5.1 2026-04-07 01:37:09.150720 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-07 01:37:09.150756 | orchestrator | Go version: go1.22.11 2026-04-07 01:37:09.150766 | orchestrator | Git commit: 4c9b3b0 2026-04-07 01:37:09.150775 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-07 01:37:09.150785 | orchestrator | OS/Arch: linux/amd64 2026-04-07 01:37:09.150794 | orchestrator | Experimental: false 2026-04-07 01:37:09.150803 | orchestrator | containerd: 2026-04-07 01:37:09.150812 | orchestrator | Version: v2.2.2 2026-04-07 01:37:09.150822 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-07 01:37:09.150831 | orchestrator | runc: 2026-04-07 01:37:09.150840 | orchestrator | Version: 1.3.4 2026-04-07 01:37:09.150850 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-07 01:37:09.150859 | orchestrator | docker-init: 2026-04-07 01:37:09.150868 | orchestrator | Version: 0.19.0 2026-04-07 01:37:09.150878 | orchestrator | GitCommit: de40ad0 2026-04-07 01:37:09.154217 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-07 01:37:09.164366 | orchestrator | + set -e 2026-04-07 01:37:09.164470 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 01:37:09.164487 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 01:37:09.164499 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 01:37:09.164511 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 01:37:09.164522 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 01:37:09.164534 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 01:37:09.164592 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 01:37:09.164604 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 01:37:09.164616 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 01:37:09.164628 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-07 01:37:09.164639 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 01:37:09.164651 | orchestrator | ++ export ARA=false 2026-04-07 01:37:09.164663 | orchestrator | ++ ARA=false 2026-04-07 01:37:09.164674 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 01:37:09.164686 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 01:37:09.164697 | orchestrator | ++ export TEMPEST=false 2026-04-07 01:37:09.164709 | orchestrator | ++ TEMPEST=false 2026-04-07 01:37:09.164720 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 01:37:09.164731 | orchestrator | ++ IS_ZUUL=true 2026-04-07 01:37:09.164742 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 01:37:09.164757 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 01:37:09.164776 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 01:37:09.164808 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 01:37:09.164826 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 01:37:09.164843 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 01:37:09.164861 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 01:37:09.164910 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 01:37:09.164930 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 01:37:09.164949 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 01:37:09.164969 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 01:37:09.164987 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 01:37:09.165005 | orchestrator | ++ INTERACTIVE=false 2026-04-07 01:37:09.165023 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 01:37:09.165041 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 01:37:09.165067 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-07 01:37:09.165079 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-07 01:37:09.172067 | orchestrator | + set -e 2026-04-07 
01:37:09.172158 | orchestrator | + VERSION=9.5.0 2026-04-07 01:37:09.172190 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-07 01:37:09.183908 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-07 01:37:09.184003 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-07 01:37:09.189333 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-07 01:37:09.194291 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-07 01:37:09.205115 | orchestrator | /opt/configuration ~ 2026-04-07 01:37:09.205201 | orchestrator | + set -e 2026-04-07 01:37:09.205215 | orchestrator | + pushd /opt/configuration 2026-04-07 01:37:09.205227 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-07 01:37:09.206584 | orchestrator | + source /opt/venv/bin/activate 2026-04-07 01:37:09.208392 | orchestrator | ++ deactivate nondestructive 2026-04-07 01:37:09.208444 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:09.208461 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:09.208500 | orchestrator | ++ hash -r 2026-04-07 01:37:09.208513 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:09.208524 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-07 01:37:09.208535 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-07 01:37:09.208583 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-07 01:37:09.208597 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-07 01:37:09.208609 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-07 01:37:09.208620 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-07 01:37:09.208631 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-07 01:37:09.208644 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 01:37:09.208656 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 01:37:09.208667 | orchestrator | ++ export PATH 2026-04-07 01:37:09.208679 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:09.208691 | orchestrator | ++ '[' -z '' ']' 2026-04-07 01:37:09.208702 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-07 01:37:09.208713 | orchestrator | ++ PS1='(venv) ' 2026-04-07 01:37:09.208724 | orchestrator | ++ export PS1 2026-04-07 01:37:09.208735 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-07 01:37:09.208746 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-07 01:37:09.208757 | orchestrator | ++ hash -r 2026-04-07 01:37:09.208769 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-07 01:37:10.629227 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-07 01:37:10.630240 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-07 01:37:10.631907 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-07 01:37:10.633618 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-07 01:37:10.635168 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-04-07 01:37:10.646634 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-07 01:37:10.648041 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-07 01:37:10.648923 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-07 01:37:10.650581 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-07 01:37:10.689513 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-07 01:37:10.690782 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-07 01:37:10.692405 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-07 01:37:10.693951 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-07 01:37:10.698222 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-07 01:37:10.936165 | orchestrator | ++ which gilt 2026-04-07 01:37:10.941155 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-07 01:37:10.959828 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-07 01:37:11.254320 | orchestrator | osism.cfg-generics: 2026-04-07 01:37:11.436783 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-04-07 01:37:11.436886 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-07 01:37:11.437167 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-07 01:37:11.437189 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-07 01:37:12.583699 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-07 01:37:12.596783 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-07 01:37:13.107530 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-07 01:37:13.169532 | orchestrator | ~ 2026-04-07 01:37:13.169656 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-07 01:37:13.169668 | orchestrator | + deactivate 2026-04-07 01:37:13.169676 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-07 01:37:13.169686 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 01:37:13.169692 | orchestrator | + export PATH 2026-04-07 01:37:13.169699 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-07 01:37:13.169706 | orchestrator | + '[' -n '' ']' 2026-04-07 01:37:13.169715 | orchestrator | + hash -r 2026-04-07 01:37:13.169721 | orchestrator | + '[' -n '' ']' 2026-04-07 01:37:13.169728 | orchestrator | + unset VIRTUAL_ENV 2026-04-07 01:37:13.169734 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-07 01:37:13.169741 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-07 01:37:13.169748 | orchestrator | + unset -f deactivate 2026-04-07 01:37:13.169754 | orchestrator | + popd 2026-04-07 01:37:13.171612 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-07 01:37:13.171698 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-07 01:37:13.171714 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-07 01:37:13.228955 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 01:37:13.229053 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-07 01:37:13.229067 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-07 01:37:13.286326 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 01:37:13.286519 | orchestrator | ++ semver 2024.2 2025.1 2026-04-07 01:37:13.342251 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 01:37:13.342349 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-07 01:37:13.429884 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-07 01:37:13.429969 | orchestrator | + source /opt/venv/bin/activate 2026-04-07 01:37:13.429980 | orchestrator | ++ deactivate nondestructive 2026-04-07 01:37:13.429989 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:13.429996 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:13.430003 | orchestrator | ++ hash -r 2026-04-07 01:37:13.430011 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:13.430056 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-07 01:37:13.430064 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-07 01:37:13.430072 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-07 01:37:13.430090 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-07 01:37:13.430099 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-07 01:37:13.430106 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-07 01:37:13.430113 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-07 01:37:13.430121 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 01:37:13.430146 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-07 01:37:13.430154 | orchestrator | ++ export PATH 2026-04-07 01:37:13.430161 | orchestrator | ++ '[' -n '' ']' 2026-04-07 01:37:13.430168 | orchestrator | ++ '[' -z '' ']' 2026-04-07 01:37:13.430175 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-07 01:37:13.430182 | orchestrator | ++ PS1='(venv) ' 2026-04-07 01:37:13.430189 | orchestrator | ++ export PS1 2026-04-07 01:37:13.430196 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-07 01:37:13.430203 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-07 01:37:13.430210 | orchestrator | ++ hash -r 2026-04-07 01:37:13.430217 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-07 01:37:14.855727 | orchestrator | 2026-04-07 01:37:14.855831 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-07 01:37:14.855846 | orchestrator | 2026-04-07 01:37:14.855856 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-07 01:37:15.457357 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:15.457474 | orchestrator | 2026-04-07 01:37:15.457497 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-04-07 01:37:16.529097 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:16.690140 | orchestrator | 2026-04-07 01:37:16.690228 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-07 01:37:16.690271 | orchestrator | 2026-04-07 01:37:16.690284 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:37:19.163781 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:19.163912 | orchestrator | 2026-04-07 01:37:19.163939 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-07 01:37:19.216339 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:19.216433 | orchestrator | 2026-04-07 01:37:19.216450 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-07 01:37:19.725881 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:19.725983 | orchestrator | 2026-04-07 01:37:19.726002 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-07 01:37:19.779303 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:19.779423 | orchestrator | 2026-04-07 01:37:19.779441 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-07 01:37:20.139439 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:20.139534 | orchestrator | 2026-04-07 01:37:20.139607 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-07 01:37:20.506733 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:20.506843 | orchestrator | 2026-04-07 01:37:20.506860 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-07 01:37:20.650965 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:20.651053 | orchestrator | 2026-04-07 01:37:20.651066 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-04-07 01:37:20.651077 | orchestrator | 2026-04-07 01:37:20.651087 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:37:22.526188 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:22.526328 | orchestrator | 2026-04-07 01:37:22.526347 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-07 01:37:22.668104 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-07 01:37:22.668201 | orchestrator | 2026-04-07 01:37:22.668216 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-07 01:37:22.737627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-07 01:37:22.737715 | orchestrator | 2026-04-07 01:37:22.737727 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-07 01:37:23.930478 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-07 01:37:23.930688 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-07 01:37:23.930718 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-07 01:37:23.930738 | orchestrator | 2026-04-07 01:37:23.930762 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-07 01:37:25.920213 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-07 01:37:25.920311 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-07 01:37:25.920324 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-07 01:37:25.920335 | orchestrator | 2026-04-07 01:37:25.920347 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-04-07 01:37:26.616999 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 01:37:26.617118 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:26.617131 | orchestrator | 2026-04-07 01:37:26.617141 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-07 01:37:27.313754 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 01:37:27.313861 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:27.313878 | orchestrator | 2026-04-07 01:37:27.313895 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-07 01:37:27.382796 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:27.382896 | orchestrator | 2026-04-07 01:37:27.382912 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-07 01:37:27.767211 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:27.767334 | orchestrator | 2026-04-07 01:37:27.767353 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-07 01:37:27.859198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-07 01:37:27.859280 | orchestrator | 2026-04-07 01:37:27.859291 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-07 01:37:29.073629 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:29.073739 | orchestrator | 2026-04-07 01:37:29.073759 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-07 01:37:30.028446 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:30.028585 | orchestrator | 2026-04-07 01:37:30.028606 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-07 01:37:44.859728 | 
orchestrator | changed: [testbed-manager] 2026-04-07 01:37:44.859820 | orchestrator | 2026-04-07 01:37:44.859830 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-07 01:37:44.922498 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:44.922637 | orchestrator | 2026-04-07 01:37:44.922680 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-07 01:37:44.922691 | orchestrator | 2026-04-07 01:37:44.922699 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:37:46.909607 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:46.909735 | orchestrator | 2026-04-07 01:37:46.909760 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-07 01:37:47.037497 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-07 01:37:47.037614 | orchestrator | 2026-04-07 01:37:47.037630 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-07 01:37:47.106826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 01:37:47.106919 | orchestrator | 2026-04-07 01:37:47.106932 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-07 01:37:49.887522 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:49.887716 | orchestrator | 2026-04-07 01:37:49.887730 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-07 01:37:49.934067 | orchestrator | ok: [testbed-manager] 2026-04-07 01:37:49.934152 | orchestrator | 2026-04-07 01:37:49.934160 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-07 01:37:50.081837 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-07 01:37:50.081926 | orchestrator | 2026-04-07 01:37:50.081938 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-07 01:37:53.064780 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-07 01:37:53.064862 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-07 01:37:53.064872 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-07 01:37:53.064879 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-07 01:37:53.064887 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-07 01:37:53.064894 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-07 01:37:53.064901 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-07 01:37:53.064908 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-07 01:37:53.064919 | orchestrator | 2026-04-07 01:37:53.064936 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-07 01:37:53.772698 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:53.772788 | orchestrator | 2026-04-07 01:37:53.772800 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-07 01:37:54.436713 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:54.436855 | orchestrator | 2026-04-07 01:37:54.436874 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-07 01:37:54.513302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-07 01:37:54.513403 | orchestrator | 2026-04-07 01:37:54.513418 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-04-07 01:37:55.794430 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-07 01:37:55.794616 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-07 01:37:55.794644 | orchestrator | 2026-04-07 01:37:55.794663 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-07 01:37:56.499357 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:56.499431 | orchestrator | 2026-04-07 01:37:56.499438 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-07 01:37:56.565770 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:56.565892 | orchestrator | 2026-04-07 01:37:56.565907 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-07 01:37:56.655083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-07 01:37:56.655176 | orchestrator | 2026-04-07 01:37:56.655190 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-07 01:37:57.305070 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:57.305179 | orchestrator | 2026-04-07 01:37:57.305199 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-07 01:37:57.384893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-07 01:37:57.384987 | orchestrator | 2026-04-07 01:37:57.384996 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-07 01:37:58.838088 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-07 01:37:58.838175 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-04-07 01:37:58.838187 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:58.838197 | orchestrator | 2026-04-07 01:37:58.838206 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-07 01:37:59.488374 | orchestrator | changed: [testbed-manager] 2026-04-07 01:37:59.488473 | orchestrator | 2026-04-07 01:37:59.488485 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-07 01:37:59.543251 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:37:59.543354 | orchestrator | 2026-04-07 01:37:59.543370 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-07 01:37:59.648892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-07 01:37:59.649001 | orchestrator | 2026-04-07 01:37:59.649014 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-07 01:38:00.225373 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:00.225458 | orchestrator | 2026-04-07 01:38:00.225471 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-07 01:38:00.669620 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:00.669707 | orchestrator | 2026-04-07 01:38:00.669718 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-07 01:38:02.002324 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-07 01:38:02.002417 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-07 01:38:02.002427 | orchestrator | 2026-04-07 01:38:02.002437 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-07 01:38:02.717802 | orchestrator | changed: [testbed-manager] 2026-04-07 
01:38:02.717890 | orchestrator | 2026-04-07 01:38:02.717900 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-07 01:38:03.133105 | orchestrator | ok: [testbed-manager] 2026-04-07 01:38:03.133217 | orchestrator | 2026-04-07 01:38:03.133237 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-07 01:38:03.519018 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:03.519091 | orchestrator | 2026-04-07 01:38:03.519098 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-07 01:38:03.554607 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:38:03.554683 | orchestrator | 2026-04-07 01:38:03.554690 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-07 01:38:03.633024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-07 01:38:03.633220 | orchestrator | 2026-04-07 01:38:03.633251 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-07 01:38:03.694335 | orchestrator | ok: [testbed-manager] 2026-04-07 01:38:03.694456 | orchestrator | 2026-04-07 01:38:03.694472 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-07 01:38:05.912015 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-07 01:38:05.912127 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-07 01:38:05.912145 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-07 01:38:05.912159 | orchestrator | 2026-04-07 01:38:05.912192 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-07 01:38:06.662452 | orchestrator | changed: [testbed-manager] 2026-04-07 
01:38:06.662595 | orchestrator | 2026-04-07 01:38:06.662613 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-07 01:38:07.426959 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:07.427065 | orchestrator | 2026-04-07 01:38:07.427084 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-07 01:38:08.193095 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:08.193221 | orchestrator | 2026-04-07 01:38:08.193240 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-07 01:38:08.284088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-07 01:38:08.284179 | orchestrator | 2026-04-07 01:38:08.284194 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-07 01:38:08.329452 | orchestrator | ok: [testbed-manager] 2026-04-07 01:38:08.329545 | orchestrator | 2026-04-07 01:38:08.329628 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-07 01:38:09.134652 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-07 01:38:09.134738 | orchestrator | 2026-04-07 01:38:09.134748 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-07 01:38:09.228060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-07 01:38:09.228161 | orchestrator | 2026-04-07 01:38:09.228174 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-07 01:38:09.963461 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:09.963611 | orchestrator | 2026-04-07 01:38:09.963630 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-04-07 01:38:10.625005 | orchestrator | ok: [testbed-manager] 2026-04-07 01:38:10.625138 | orchestrator | 2026-04-07 01:38:10.625171 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-07 01:38:10.684443 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:38:10.684610 | orchestrator | 2026-04-07 01:38:10.684635 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-07 01:38:10.754509 | orchestrator | ok: [testbed-manager] 2026-04-07 01:38:10.754639 | orchestrator | 2026-04-07 01:38:10.754657 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-07 01:38:11.708989 | orchestrator | changed: [testbed-manager] 2026-04-07 01:38:11.709110 | orchestrator | 2026-04-07 01:38:11.709135 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-07 01:39:30.327304 | orchestrator | changed: [testbed-manager] 2026-04-07 01:39:30.327420 | orchestrator | 2026-04-07 01:39:30.327438 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-07 01:39:31.427272 | orchestrator | ok: [testbed-manager] 2026-04-07 01:39:31.427367 | orchestrator | 2026-04-07 01:39:31.427383 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-07 01:39:31.489943 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:39:31.490052 | orchestrator | 2026-04-07 01:39:31.490062 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-07 01:39:34.301779 | orchestrator | changed: [testbed-manager] 2026-04-07 01:39:34.301861 | orchestrator | 2026-04-07 01:39:34.301871 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-04-07 01:39:34.381472 | orchestrator | ok: [testbed-manager] 2026-04-07 01:39:34.381556 | orchestrator | 2026-04-07 01:39:34.381641 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-07 01:39:34.381656 | orchestrator | 2026-04-07 01:39:34.381667 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-07 01:39:34.564054 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:39:34.564155 | orchestrator | 2026-04-07 01:39:34.564170 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-07 01:40:34.628416 | orchestrator | Pausing for 60 seconds 2026-04-07 01:40:34.628535 | orchestrator | changed: [testbed-manager] 2026-04-07 01:40:34.628558 | orchestrator | 2026-04-07 01:40:34.628608 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-07 01:40:38.280500 | orchestrator | changed: [testbed-manager] 2026-04-07 01:40:38.280639 | orchestrator | 2026-04-07 01:40:38.280661 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-07 01:41:40.464071 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-07 01:41:40.464157 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-04-07 01:41:40.464183 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-04-07 01:41:40.464191 | orchestrator | changed: [testbed-manager]
2026-04-07 01:41:40.464199 | orchestrator |
2026-04-07 01:41:40.464207 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-07 01:41:52.363682 | orchestrator | changed: [testbed-manager]
2026-04-07 01:41:52.363828 | orchestrator |
2026-04-07 01:41:52.363858 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-07 01:41:52.479358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-07 01:41:52.479455 | orchestrator |
2026-04-07 01:41:52.479470 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-07 01:41:52.479483 | orchestrator |
2026-04-07 01:41:52.479495 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-07 01:41:52.531410 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:41:52.531507 | orchestrator |
2026-04-07 01:41:52.531527 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-07 01:41:52.634408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-07 01:41:52.634516 | orchestrator |
2026-04-07 01:41:52.634535 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-07 01:41:53.529922 | orchestrator | changed: [testbed-manager]
2026-04-07 01:41:53.530007 | orchestrator |
2026-04-07 01:41:53.530074 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-07 01:41:57.072100 | orchestrator | ok: [testbed-manager]
2026-04-07 01:41:57.072225 | orchestrator |
2026-04-07 01:41:57.072251 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-07 01:41:57.157016 | orchestrator | ok: [testbed-manager] => {
2026-04-07 01:41:57.157130 | orchestrator | "version_check_result.stdout_lines": [
2026-04-07 01:41:57.157152 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-07 01:41:57.157170 | orchestrator | "Checking running containers against expected versions...",
2026-04-07 01:41:57.157190 | orchestrator | "",
2026-04-07 01:41:57.157208 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-07 01:41:57.157224 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-07 01:41:57.157243 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157260 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-07 01:41:57.157277 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157291 | orchestrator | "",
2026-04-07 01:41:57.157307 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-07 01:41:57.157353 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-07 01:41:57.157371 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157388 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-07 01:41:57.157405 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157423 | orchestrator | "",
2026-04-07 01:41:57.157440 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-07 01:41:57.157457 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-07 01:41:57.157473 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157489 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-07 01:41:57.157505 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157519 | orchestrator | "",
2026-04-07 01:41:57.157536 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-07 01:41:57.157554 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-07 01:41:57.157571 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157663 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-07 01:41:57.157685 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157702 | orchestrator | "",
2026-04-07 01:41:57.157721 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-07 01:41:57.157739 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-07 01:41:57.157755 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157773 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-07 01:41:57.157791 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157808 | orchestrator | "",
2026-04-07 01:41:57.157825 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-07 01:41:57.157843 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.157860 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157877 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.157894 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.157911 | orchestrator | "",
2026-04-07 01:41:57.157927 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-07 01:41:57.157943 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-07 01:41:57.157959 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.157977 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-07 01:41:57.157994 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158011 | orchestrator | "",
2026-04-07 01:41:57.158098 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-07 01:41:57.158115 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-07 01:41:57.158132 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158148 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-07 01:41:57.158165 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158183 | orchestrator | "",
2026-04-07 01:41:57.158199 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-07 01:41:57.158263 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-07 01:41:57.158281 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158298 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-07 01:41:57.158314 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158330 | orchestrator | "",
2026-04-07 01:41:57.158346 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-07 01:41:57.158363 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-07 01:41:57.158379 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158394 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-07 01:41:57.158410 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158425 | orchestrator | "",
2026-04-07 01:41:57.158439 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-07 01:41:57.158469 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158486 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158501 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158516 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158532 | orchestrator | "",
2026-04-07 01:41:57.158548 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-07 01:41:57.158564 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158580 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158667 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158685 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158702 | orchestrator | "",
2026-04-07 01:41:57.158718 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-07 01:41:57.158733 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158750 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158766 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158782 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158798 | orchestrator | "",
2026-04-07 01:41:57.158815 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-07 01:41:57.158830 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158846 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158862 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158900 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.158914 | orchestrator | "",
2026-04-07 01:41:57.158928 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-07 01:41:57.158941 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158967 | orchestrator | " Enabled: true",
2026-04-07 01:41:57.158981 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-07 01:41:57.158993 | orchestrator | " Status: ✅ MATCH",
2026-04-07 01:41:57.159006 | orchestrator | "",
2026-04-07 01:41:57.159018 | orchestrator | "=== Summary ===",
2026-04-07 01:41:57.159032 | orchestrator | "Errors (version mismatches): 0",
2026-04-07 01:41:57.159043 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-07 01:41:57.159054 | orchestrator | "",
2026-04-07 01:41:57.159067 | orchestrator | "✅ All running containers match expected versions!"
2026-04-07 01:41:57.159080 | orchestrator | ]
2026-04-07 01:41:57.159094 | orchestrator | }
2026-04-07 01:41:57.159107 | orchestrator |
2026-04-07 01:41:57.159120 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-07 01:41:57.202548 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:41:57.202731 | orchestrator |
2026-04-07 01:41:57.202751 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:41:57.202766 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-07 01:41:57.202778 | orchestrator |
2026-04-07 01:41:57.323383 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-07 01:41:57.323463 | orchestrator | + deactivate
2026-04-07 01:41:57.323478 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-07 01:41:57.323490 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 01:41:57.323501 | orchestrator | + export PATH
2026-04-07 01:41:57.323513 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-07 01:41:57.323524 | orchestrator | + '[' -n '' ']'
2026-04-07 01:41:57.323535 | orchestrator | + hash -r
2026-04-07 01:41:57.323544 | orchestrator | + '[' -n '' ']'
2026-04-07 01:41:57.323551 | orchestrator | + unset VIRTUAL_ENV
2026-04-07 01:41:57.323557 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-07 01:41:57.323564 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-07 01:41:57.323571 | orchestrator | + unset -f deactivate
2026-04-07 01:41:57.323578 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-04-07 01:41:57.334646 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-07 01:41:57.334723 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-07 01:41:57.334753 | orchestrator | + local max_attempts=60
2026-04-07 01:41:57.334762 | orchestrator | + local name=ceph-ansible
2026-04-07 01:41:57.334771 | orchestrator | + local attempt_num=1
2026-04-07 01:41:57.335751 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-07 01:41:57.373775 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-07 01:41:57.373855 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-07 01:41:57.373865 | orchestrator | + local max_attempts=60
2026-04-07 01:41:57.373872 | orchestrator | + local name=kolla-ansible
2026-04-07 01:41:57.373878 | orchestrator | + local attempt_num=1
2026-04-07 01:41:57.374817 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-07 01:41:57.418227 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-07 01:41:57.418295 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-07 01:41:57.418302 | orchestrator | + local max_attempts=60
2026-04-07 01:41:57.418307 | orchestrator | + local name=osism-ansible
2026-04-07 01:41:57.418311 | orchestrator | + local attempt_num=1
2026-04-07 01:41:57.419115 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-07 01:41:57.457947 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-07 01:41:57.458013 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-07 01:41:57.458063 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-07 01:41:58.229904 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-07 01:41:58.416805 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-07 01:41:58.416934 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-04-07 01:41:58.416962 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-04-07 01:41:58.416984 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-07 01:41:58.417007 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-04-07 01:41:58.417054 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-04-07 01:41:58.417077 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-04-07 01:41:58.417097 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-04-07 01:41:58.417116 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-04-07 01:41:58.417136 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-04-07 01:41:58.417157 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-04-07 01:41:58.417176 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-04-07 01:41:58.417196 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-04-07 01:41:58.417361 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-04-07 01:41:58.417389 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-04-07 01:41:58.417655 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-04-07 01:41:58.423503 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-07 01:41:58.486800 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-07 01:41:58.486915 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-04-07 01:41:58.491430 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-04-07 01:42:10.881936 | orchestrator | 2026-04-07 01:42:10 | INFO  | Task 64fa4134-a455-4e93-9d0c-cc4c3f5c1682 (resolvconf) was prepared for execution.
2026-04-07 01:42:10.882099 | orchestrator | 2026-04-07 01:42:10 | INFO  | It takes a moment until task 64fa4134-a455-4e93-9d0c-cc4c3f5c1682 (resolvconf) has been started and output is visible here.
2026-04-07 01:42:26.067311 | orchestrator |
2026-04-07 01:42:26.067416 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-07 01:42:26.067430 | orchestrator |
2026-04-07 01:42:26.067440 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-07 01:42:26.067450 | orchestrator | Tuesday 07 April 2026 01:42:15 +0000 (0:00:00.159) 0:00:00.159 *********
2026-04-07 01:42:26.067459 | orchestrator | ok: [testbed-manager]
2026-04-07 01:42:26.067469 | orchestrator |
2026-04-07 01:42:26.067478 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-07 01:42:26.067487 | orchestrator | Tuesday 07 April 2026 01:42:19 +0000 (0:00:04.018) 0:00:04.178 *********
2026-04-07 01:42:26.067497 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:42:26.067508 | orchestrator |
2026-04-07 01:42:26.067517 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-07 01:42:26.067526 | orchestrator | Tuesday 07 April 2026 01:42:19 +0000 (0:00:00.072) 0:00:04.251 *********
2026-04-07 01:42:26.067535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-07 01:42:26.067546 | orchestrator |
2026-04-07 01:42:26.067555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-07 01:42:26.067565 | orchestrator | Tuesday 07 April 2026 01:42:19 +0000 (0:00:00.086) 0:00:04.337 *********
2026-04-07 01:42:26.067670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 01:42:26.067685 | orchestrator |
2026-04-07 01:42:26.067696 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-07 01:42:26.067705 | orchestrator | Tuesday 07 April 2026 01:42:19 +0000 (0:00:00.081) 0:00:04.419 *********
2026-04-07 01:42:26.067715 | orchestrator | ok: [testbed-manager]
2026-04-07 01:42:26.067725 | orchestrator |
2026-04-07 01:42:26.067735 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-07 01:42:26.067745 | orchestrator | Tuesday 07 April 2026 01:42:20 +0000 (0:00:01.225) 0:00:05.644 *********
2026-04-07 01:42:26.067754 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:42:26.067763 | orchestrator |
2026-04-07 01:42:26.067772 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-07 01:42:26.067781 | orchestrator | Tuesday 07 April 2026 01:42:20 +0000 (0:00:00.058) 0:00:05.703 *********
2026-04-07 01:42:26.067812 | orchestrator | ok: [testbed-manager]
2026-04-07 01:42:26.067822 | orchestrator |
2026-04-07 01:42:26.067831 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-07 01:42:26.067841 | orchestrator | Tuesday 07 April 2026 01:42:21 +0000 (0:00:00.526) 0:00:06.230 *********
2026-04-07 01:42:26.067850 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:42:26.067859 | orchestrator |
2026-04-07 01:42:26.067869 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-07 01:42:26.067880 | orchestrator | Tuesday 07 April 2026 01:42:21 +0000 (0:00:00.085) 0:00:06.316 *********
2026-04-07 01:42:26.067889 | orchestrator | changed: [testbed-manager]
2026-04-07 01:42:26.067899 | orchestrator |
2026-04-07 01:42:26.067909 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-07 01:42:26.067919 | orchestrator | Tuesday 07 April 2026 01:42:21 +0000 (0:00:00.554) 0:00:06.871 *********
2026-04-07 01:42:26.067928 | orchestrator | changed: [testbed-manager]
2026-04-07 01:42:26.067937 | orchestrator |
2026-04-07 01:42:26.067947 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-07 01:42:26.067956 | orchestrator | Tuesday 07 April 2026 01:42:23 +0000 (0:00:01.176) 0:00:08.048 *********
2026-04-07 01:42:26.067966 | orchestrator | ok: [testbed-manager]
2026-04-07 01:42:26.067975 | orchestrator |
2026-04-07 01:42:26.067985 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-07 01:42:26.067995 | orchestrator | Tuesday 07 April 2026 01:42:24 +0000 (0:00:01.067) 0:00:09.115 *********
2026-04-07 01:42:26.068005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-07 01:42:26.068015 | orchestrator |
2026-04-07 01:42:26.068025 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-07 01:42:26.068035 | orchestrator | Tuesday 07 April 2026 01:42:24 +0000 (0:00:00.082) 0:00:09.197 *********
2026-04-07 01:42:26.068044 | orchestrator | changed: [testbed-manager]
2026-04-07 01:42:26.068053 | orchestrator |
2026-04-07 01:42:26.068061 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:42:26.068071 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-07 01:42:26.068079 | orchestrator |
2026-04-07 01:42:26.068088 | orchestrator |
2026-04-07 01:42:26.068097 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:42:26.068107 | orchestrator | Tuesday 07 April 2026 01:42:25 +0000 (0:00:01.455) 0:00:10.652 *********
2026-04-07 01:42:26.068115 | orchestrator | ===============================================================================
2026-04-07 01:42:26.068125 | orchestrator | Gathering Facts --------------------------------------------------------- 4.02s
2026-04-07 01:42:26.068134 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.46s
2026-04-07 01:42:26.068143 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.23s
2026-04-07 01:42:26.068153 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s
2026-04-07 01:42:26.068162 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.07s
2026-04-07 01:42:26.068172 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2026-04-07 01:42:26.068202 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s
2026-04-07 01:42:26.068213 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-04-07 01:42:26.068222 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-04-07 01:42:26.068231 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-04-07 01:42:26.068241 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-04-07 01:42:26.068252 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-04-07 01:42:26.068270 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-04-07 01:42:26.423054 | orchestrator | + osism apply sshconfig
2026-04-07 01:42:38.564927 | orchestrator | 2026-04-07 01:42:38 | INFO  | Task 05065e2f-41e2-450a-a49b-2668be565a8c (sshconfig) was prepared for execution.
2026-04-07 01:42:38.565049 | orchestrator | 2026-04-07 01:42:38 | INFO  | It takes a moment until task 05065e2f-41e2-450a-a49b-2668be565a8c (sshconfig) has been started and output is visible here.
2026-04-07 01:42:51.273447 | orchestrator |
2026-04-07 01:42:51.273565 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-07 01:42:51.273582 | orchestrator |
2026-04-07 01:42:51.273718 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-07 01:42:51.273739 | orchestrator | Tuesday 07 April 2026 01:42:43 +0000 (0:00:00.179) 0:00:00.179 *********
2026-04-07 01:42:51.273751 | orchestrator | ok: [testbed-manager]
2026-04-07 01:42:51.273764 | orchestrator |
2026-04-07 01:42:51.273776 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-07 01:42:51.273788 | orchestrator | Tuesday 07 April 2026 01:42:43 +0000 (0:00:00.587) 0:00:00.767 *********
2026-04-07 01:42:51.273800 | orchestrator | changed: [testbed-manager]
2026-04-07 01:42:51.273813 | orchestrator |
2026-04-07 01:42:51.273825 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-07 01:42:51.273836 | orchestrator | Tuesday 07 April 2026 01:42:44 +0000 (0:00:00.590) 0:00:01.357 *********
2026-04-07 01:42:51.273848 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-07 01:42:51.273860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-07 01:42:51.273872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-07 01:42:51.273883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-07 01:42:51.273895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-07 01:42:51.273906 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-07 01:42:51.273918 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-07 01:42:51.273929 | orchestrator |
2026-04-07 01:42:51.273941 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-07 01:42:51.273952 | orchestrator | Tuesday 07 April 2026 01:42:50 +0000 (0:00:06.070) 0:00:07.427 *********
2026-04-07 01:42:51.273966 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:42:51.273979 | orchestrator |
2026-04-07 01:42:51.273992 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-07 01:42:51.274005 | orchestrator | Tuesday 07 April 2026 01:42:50 +0000 (0:00:00.085) 0:00:07.513 *********
2026-04-07 01:42:51.274080 | orchestrator | changed: [testbed-manager]
2026-04-07 01:42:51.274101 | orchestrator |
2026-04-07 01:42:51.274122 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:42:51.274143 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 01:42:51.274164 | orchestrator |
2026-04-07 01:42:51.274176 | orchestrator |
2026-04-07 01:42:51.274188 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:42:51.274199 | orchestrator | Tuesday 07 April 2026 01:42:50 +0000 (0:00:00.616) 0:00:08.129 *********
2026-04-07 01:42:51.274211 | orchestrator | ===============================================================================
2026-04-07 01:42:51.274223 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.07s
2026-04-07 01:42:51.274234 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s
2026-04-07 01:42:51.274246 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.59s
2026-04-07 01:42:51.274257 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s
2026-04-07 01:42:51.274295 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s
2026-04-07 01:42:51.694687 | orchestrator | + osism apply known-hosts
2026-04-07 01:43:03.772934 | orchestrator | 2026-04-07 01:43:03 | INFO  | Task bd47620e-0361-4071-b11f-49f7c187df09 (known-hosts) was prepared for execution.
2026-04-07 01:43:03.773046 | orchestrator | 2026-04-07 01:43:03 | INFO  | It takes a moment until task bd47620e-0361-4071-b11f-49f7c187df09 (known-hosts) has been started and output is visible here.
2026-04-07 01:43:21.660522 | orchestrator |
2026-04-07 01:43:21.660697 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-07 01:43:21.660724 | orchestrator |
2026-04-07 01:43:21.660738 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-07 01:43:21.660750 | orchestrator | Tuesday 07 April 2026 01:43:08 +0000 (0:00:00.170) 0:00:00.170 *********
2026-04-07 01:43:21.660763 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-07 01:43:21.660775 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-07 01:43:21.660786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-07 01:43:21.660798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-07 01:43:21.660809 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-07 01:43:21.660821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-07 01:43:21.660832 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-07 01:43:21.660843 | orchestrator |
2026-04-07 01:43:21.660855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-07 01:43:21.660867 | orchestrator | Tuesday 07 April 2026 01:43:14 +0000 (0:00:06.270) 0:00:06.441 *********
2026-04-07 01:43:21.660880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-07 01:43:21.660893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-07 01:43:21.660905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-07 01:43:21.660916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-07 01:43:21.660928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-07 01:43:21.660950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-07 01:43:21.660963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-07 01:43:21.660974 | orchestrator |
2026-04-07 01:43:21.660986 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.660997 | orchestrator | Tuesday 07 April 2026 01:43:14 +0000 (0:00:00.189) 0:00:06.630 *********
2026-04-07 01:43:21.661009 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILSy91i1w3Q5R7SjcrRecODHON/BhwXiiAGWtdamGJr2)
2026-04-07 01:43:21.661029 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3hPbiyuDjCUD7KkY+TC1Bekqze25Exa8zLaklrc0eAZyqmezeRvz/MngXMklZIXPgnLCcTXW6G1JbE0Z7Uwc2xZ0ixe0cu3abnlE1dc7ji4mkyhdqDut42R71XJtBJ5u34MbRPc/canMHuRM2fNcGvekSHW4p81fEqr3+I3skZkcc1Buy9YxpgvjI64S8OMhLdL1mpBQ/g3qut0nv7gWwGGmKcAWE/lWQPFxW3ScGxS0UU4Lqf4e0sTXBiUE+IUjPeKso62X8sw76HjIwZcsrKoxgjDtL+ouHNLd7SeZXOE8QNZs9mvPjaQHJPhNG6aE2MY1TxqLM27y5KH5WKqrTgtlTFYJ/10+v1lH3Czodl4Vk+/j0peoBV6FMBl0/QvPVqyTsXjsoDyFIXvvQoSBv8mVk5tIksUHAreuECZr7FeUs4VXYQw10d06omUj8FyG2jXDc/WzRZ+zIsazHPZQoPApxlcsc7r2MZolzfzuMfxGZO7ROUiUfgpz6nGeAaZs=)
2026-04-07 01:43:21.661068 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMeZTUAIdZePj+Ukq1VvUM8+fpkp1T2ie9wOpWUc1EaTceu7gQy7CDQnDiOj/wSAok/J7JQiBfI4zCNowXX8CEM=)
2026-04-07 01:43:21.661084 | orchestrator |
2026-04-07 01:43:21.661097 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.661111 | orchestrator | Tuesday 07 April 2026 01:43:15 +0000 (0:00:01.262) 0:00:07.893 *********
2026-04-07 01:43:21.661144 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUw8UWSUFRvuEuQCzaH1E0zlTRNNmqGKfW2rTC+jOeCas1znA/toQbwWXSJd0LgpeNhkW98XOnXCXl3MOe3sK0c6mVgbpprh3fKcwQj0CTJnfoErhx5K5R9Q2vzbmUS7mrJ6GPXFFaP4mwmkGZSX9tmCzycxIUg+jRR2YAab/hSl4qnt6kNuxM6DbSfpsHLxXQaptnIaqmXNOwuZLZbBWeJiP0uVdpXRXqeyb057MHPeURS2XYLQNDFBKOZNg4Zyq1chWOy4D7pCd7EDI7pyGRHYy8q9BbmB/H2r9LOPSZ26WgKO2jJBOrVB0piKxWOQE0tTm6rjJoLmOFgoLD06qr8to+hJ1lJyKixByTGTgkAZjIfXPOO5P+x6iVugQlOBvtGAQ2tbvMYWCYnpynpw84zWiXLrtGF8hayxKIOrfLTto6NPvaUdpITzbqLO+1EsxazbqhEb178iH9ZltPly7G1TGYHhMa273gORxhn0UelHNHWJwBWZN5TegUp/HJFjU=)
2026-04-07 01:43:21.661159 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCMfqzDBCxsoHYgBGEDdnHpeBGo8Q7gwwdzyu3wyTyMDmrnBq7TReIVCWs62UmwyfcF4n3A+sjTclEv5wPALKIM=)
2026-04-07 01:43:21.661172 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG9phIBFXlyQIcjmB7YLXIX+aylXufKGNWnwkooXtFAZ)
2026-04-07 01:43:21.661185 | orchestrator |
2026-04-07 01:43:21.661198 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.661211 | orchestrator | Tuesday 07 April 2026 01:43:17 +0000 (0:00:01.113) 0:00:09.007 *********
2026-04-07 01:43:21.661224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCja6vDH+l/j9+9h9bloEhkFlhyETGohta0YNo9Qu2Rh8ZsyEJuFFCM815UqOC8Vsm5fzgO9/G7THXj4XYDDMMuta6i8/CjWjsgt7dodgmcI4pSiugVU0sPY0GufRbZ4kSM5/ekiDbTMhnxVhtiSDR69x7IzzIe8NuDEEadkPSV5B3zeDaaftaa9Eej6FUeP1puYRuYtxlzshJjVGqv6A8e+9iBY3l7fYY/GW7hPXH5wFbW4KmIrwCukFmf7U4hbJFYGqhS2VR0DnXEPhJPAxSNKrPTeKMf+fowglqLgiNrH8poCloGIZ9wDwYjSna25hiZDfJPLaIPff5kt1Afl4eDn/nkliTIswBrxMwgUMRhUkt1YQy+uJy0BBj2cHworjg4uGNbKs0q4n477SpsrJ1FqZ7IV65fza+M3xMkKLZjU3OC9QiXh5HWAgv8H/N2ntJ+oRUJYRfQKHdVX2DMK+X+5+okesHLHa/ku+BGpXCGlyDciH8GwMMezi6KG+uZmP0=)
2026-04-07 01:43:21.661237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDK0kqbCdDjPNvRjfpvFfjfNMrPV59Z8bd5K/5HzaidvC4IadXDDJfgpHDk7UcnmHJeZCqnIHr1Y0QZBIfkbDnE=)
2026-04-07 01:43:21.661250 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFdcj7YjRPkCGWiLmH8GOm5johlvzSeAkBRGb675o3xQ)
2026-04-07 01:43:21.661263 | orchestrator |
2026-04-07 01:43:21.661277 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.661289 | orchestrator | Tuesday 07 April 2026 01:43:18 +0000 (0:00:01.157) 0:00:10.164 *********
2026-04-07 01:43:21.661303 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGxt590o6UZ/zFFbtUzw5CsHLwQKwcJvrFTubT2aHK8xmlz6CNKR1g0x8zAB3moBTWGXvn807y37mi2b9hOG2g=)
2026-04-07 01:43:21.661317 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGTl27jkbJLG7ADsiiUqr/J4QntPv9DVnXaMWOq9ZjnArXN4vxG8EjFvfJm6ltXWJBm0l8Oqa+0Yjo8w5RGa7iIyJUax8gCExwIsoiVZ5beF1u2AnRojFInhgm0NCRUXXHg6LB/lN+dlDMgUIwHEHbOiPFpZi+at7YJVru0+n+cgTjvFESAW/StI5aAakzf1VmSrG2B3XbpuB6u4BqqEoktT/uXOsaM9UshMfVnUZMKq6l9HgJgG3/NgxL6KCRM0PGYGy9N9VSF6l0Yeb8BtgBBYUUAN+nkQm7UrxBDmkeu/3d0hQZ650ODl5mbXg1INDNBRXwUoNT4ikBK0Iis4suRED9QKRtSoiAEJKEv5/RFK7Wy1puOxtF/+/c7oUxfnpOZEu7tilkATYR0RMDvCiuo42sibWLgT0UmYs9nRLBQrTC9c2vb69kcb+UaNwG5socp2Oy+V9HPk87Pdvag4GAKVe+s9XhsZM5xVqKgIh+iUVei/g9+gKaBwX5LlMdwtk=)
2026-04-07 01:43:21.661338 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBT4xLNFp1/a6K20+1YF5eykDVKg8ly/t7MUA6YHPxrv)
2026-04-07 01:43:21.661351 | orchestrator |
2026-04-07 01:43:21.661364 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.661376 | orchestrator | Tuesday 07 April 2026 01:43:19 +0000 (0:00:01.164) 0:00:11.329 *********
2026-04-07 01:43:21.661464 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJa1mHxWvMlgrN8Tuc1KE3t38yv0N+e6otm7FOqAVdXG)
2026-04-07 01:43:21.661476 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq2LvlCyfz2u4kKkB8C7UZRpl5h1EyhYiigArlS/g1JSdVDz/eJmdbWCJrXpjBeapPe+emguUUbiiY8seSlI/xj9xW/DK2Q5KDPOJy5o8fGQqre0G31BIEv6zKucWQLFnWiTRMAd+bIlgddLesgbW5vIi5P160/0Br0n171hSalRUewHSasKVaOxNMtm/Y7N8w8inpoLnC+oukioveTqaVJVvY2MSwfcAJfqRczm01eNJS/x869TS+3ViBEcykmPK4BZkB70rnPiK1C+lCFLAR/jULp8O2r2+enCiNDhTD2ZIhw18xaEQEkUVBaQg13hOO2b1Y8aEvBQoOzdQqK8Ha3t2wpSKEG5ATuCGFt/24xnJLtKbkp+Ms/2fl8ZM+no26EqnL06WdbHybUGwtUGvs4imyz0OkHkwBLuU734EcfxBy/ctKdubQb38thchVnshC7XSdEaxZT6Su23ysOOT48q4fDkYee9hiitFe6j1Q/dSz7OI6QipFdd6wrh8caC0=)
2026-04-07 01:43:21.661488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCH2IZrQXmYDXwMxWtO6dMU4bvGnz7Eix8mlHa7J0xbxOZZuvwWURIHzK6KiGBKy71XHafKYvamZphv8ilAVfWU=)
2026-04-07 01:43:21.661500 | orchestrator |
2026-04-07 01:43:21.661511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-07 01:43:21.661523 | orchestrator | Tuesday 07 April 2026 01:43:20 +0000 (0:00:01.124) 0:00:12.454 *********
2026-04-07 01:43:21.661543 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMPptej3zsZEY+P1TBzpZoV3Qy9t4xTIwD+suTxXqt+o6R/+eumMChrMk/3DIP6PvKU7vtL73C3LlSKG8JUWghEqeqXTqDa2DnDA8KrRnT+gaHmN1n+dFSGqXQVQAcRIysNlMS/R4u4uMiRcCX5RLik/HafmFFqpmZcpCl2ACEyFjCR+0lVtLMZ/sE2onHRf4y18nAVxH0mCi8KBifSNBrdUTfYHkpw3IEIv2yEV8kq+Y5IrLol+aDbZT4OOEKwIej/n76yKJAK0ZOjHbtYvr4YkjlvLQJe9DtoaAD2zR4gMYEXdLpEuLNLbVboty36hrGAFweLhO9HtXkLv/xdFbq+4JyxKRYc5nyFX9K9nHJGNuPsjIhMhxOY6FyHXsK1t3pbWN2XrTfJGfR59vedOYWNa6imfG1TZ6sJHnhycehBfsepBrLgAxYN5H4guq86HtiB+09kfmyzEwP1WLTMLSjBaqsN8icWTgmtyZso8S+YNaj+luqkfc3MErGVF+arw0=)
2026-04-07 01:43:33.313294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdEfccpXI5eDxqZsix+iQEQ8qK5uCyk05qUvhyJ5QQ/7x+zWM6IdV+31Qn/MmyeuQz2csaciWIF5vEQ7Gd4cwE=)
2026-04-07 01:43:33.313438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKCL/LMKquYsr+sU6CLtTcIbSATQGrr2XnYckAUKbCOt) 2026-04-07 01:43:33.313470 | orchestrator | 2026-04-07 01:43:33.313486 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:33.313500 | orchestrator | Tuesday 07 April 2026 01:43:21 +0000 (0:00:01.168) 0:00:13.622 ********* 2026-04-07 01:43:33.313515 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLBvMJv+F8m3P9oooJRYOi5+cFUWD4nhEmk0YSOSkiasJnsLXKxCZJmJTYQcxETDmMwWyiefPTMats0H4mhfPX8KlOaMV5W7+AJpfxH8xSC8CYJC/83w0t02Zxqi7WwWy61GZG1S3L0gKvpDjqnxVpsdQ8JLBsTEXKvKYfBuUn2hYrUD40UA4SgKrfpsoTSTXOEIbKsgB/EjFy96XA46U7kITmHgRnZqS2yFdvTLL393eZX6woGvWM7IPSikQ7qT5srkrmR69qDSbxI9qZvJkXb+6O7X5y8vHE6JGDHaFilJdwqsZJutiw62vXg5NkxXzEebxfNMntmH99gy+eVEQ7VYOrzfTEnpiMdfpgNmSmpTRzCcJuvAFktT0BPxVxzBcB9m9MlJbcAs81Zj9Jq89EfWqnlyV1lX9Qovu92Q0Wl4m4vtgp660iaX6nH15CHPhMpg8OE3h0BhL6aDE/XT3s325DWW6HGR2luVfZuwnWkd+Z5+skQLJWJYVeNC0aKDE=) 2026-04-07 01:43:33.313530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPNhHBkIo4j5VXlwQpHXmq8iXXCaqaMADPPx+hpnVg+3qAVDRLwDVGwD+98nWzkVEvVeGEmUt9Vlz57g/tF9Dhg=) 2026-04-07 01:43:33.313571 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMx8yg0atvLR1sXe94G9tNm5H/2T+gtnBrAtn6G8L6q) 2026-04-07 01:43:33.313584 | orchestrator | 2026-04-07 01:43:33.313595 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-07 01:43:33.313641 | orchestrator | Tuesday 07 April 2026 01:43:22 +0000 (0:00:01.152) 0:00:14.774 ********* 2026-04-07 01:43:33.313654 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-07 01:43:33.313666 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-04-07 01:43:33.313677 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-07 01:43:33.313689 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-07 01:43:33.313700 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-07 01:43:33.313712 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-07 01:43:33.313723 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-07 01:43:33.313734 | orchestrator | 2026-04-07 01:43:33.313746 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-07 01:43:33.313758 | orchestrator | Tuesday 07 April 2026 01:43:28 +0000 (0:00:05.549) 0:00:20.324 ********* 2026-04-07 01:43:33.313771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-07 01:43:33.313785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-07 01:43:33.313797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-07 01:43:33.313808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-07 01:43:33.313820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-07 01:43:33.313833 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-07 01:43:33.313846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-07 01:43:33.313859 | orchestrator | 2026-04-07 01:43:33.313871 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:33.313884 | orchestrator | Tuesday 07 April 2026 01:43:28 +0000 (0:00:00.200) 0:00:20.525 ********* 2026-04-07 01:43:33.313943 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3hPbiyuDjCUD7KkY+TC1Bekqze25Exa8zLaklrc0eAZyqmezeRvz/MngXMklZIXPgnLCcTXW6G1JbE0Z7Uwc2xZ0ixe0cu3abnlE1dc7ji4mkyhdqDut42R71XJtBJ5u34MbRPc/canMHuRM2fNcGvekSHW4p81fEqr3+I3skZkcc1Buy9YxpgvjI64S8OMhLdL1mpBQ/g3qut0nv7gWwGGmKcAWE/lWQPFxW3ScGxS0UU4Lqf4e0sTXBiUE+IUjPeKso62X8sw76HjIwZcsrKoxgjDtL+ouHNLd7SeZXOE8QNZs9mvPjaQHJPhNG6aE2MY1TxqLM27y5KH5WKqrTgtlTFYJ/10+v1lH3Czodl4Vk+/j0peoBV6FMBl0/QvPVqyTsXjsoDyFIXvvQoSBv8mVk5tIksUHAreuECZr7FeUs4VXYQw10d06omUj8FyG2jXDc/WzRZ+zIsazHPZQoPApxlcsc7r2MZolzfzuMfxGZO7ROUiUfgpz6nGeAaZs=) 2026-04-07 01:43:33.313973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMeZTUAIdZePj+Ukq1VvUM8+fpkp1T2ie9wOpWUc1EaTceu7gQy7CDQnDiOj/wSAok/J7JQiBfI4zCNowXX8CEM=) 2026-04-07 01:43:33.314005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILSy91i1w3Q5R7SjcrRecODHON/BhwXiiAGWtdamGJr2) 2026-04-07 01:43:33.314098 | orchestrator | 2026-04-07 01:43:33.314120 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:33.314139 | orchestrator | Tuesday 07 April 2026 
01:43:29 +0000 (0:00:01.199) 0:00:21.725 ********* 2026-04-07 01:43:33.314158 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCMfqzDBCxsoHYgBGEDdnHpeBGo8Q7gwwdzyu3wyTyMDmrnBq7TReIVCWs62UmwyfcF4n3A+sjTclEv5wPALKIM=) 2026-04-07 01:43:33.314176 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG9phIBFXlyQIcjmB7YLXIX+aylXufKGNWnwkooXtFAZ) 2026-04-07 01:43:33.314197 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUw8UWSUFRvuEuQCzaH1E0zlTRNNmqGKfW2rTC+jOeCas1znA/toQbwWXSJd0LgpeNhkW98XOnXCXl3MOe3sK0c6mVgbpprh3fKcwQj0CTJnfoErhx5K5R9Q2vzbmUS7mrJ6GPXFFaP4mwmkGZSX9tmCzycxIUg+jRR2YAab/hSl4qnt6kNuxM6DbSfpsHLxXQaptnIaqmXNOwuZLZbBWeJiP0uVdpXRXqeyb057MHPeURS2XYLQNDFBKOZNg4Zyq1chWOy4D7pCd7EDI7pyGRHYy8q9BbmB/H2r9LOPSZ26WgKO2jJBOrVB0piKxWOQE0tTm6rjJoLmOFgoLD06qr8to+hJ1lJyKixByTGTgkAZjIfXPOO5P+x6iVugQlOBvtGAQ2tbvMYWCYnpynpw84zWiXLrtGF8hayxKIOrfLTto6NPvaUdpITzbqLO+1EsxazbqhEb178iH9ZltPly7G1TGYHhMa273gORxhn0UelHNHWJwBWZN5TegUp/HJFjU=) 2026-04-07 01:43:33.314217 | orchestrator | 2026-04-07 01:43:33.314236 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:33.314255 | orchestrator | Tuesday 07 April 2026 01:43:30 +0000 (0:00:01.154) 0:00:22.879 ********* 2026-04-07 01:43:33.314276 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFdcj7YjRPkCGWiLmH8GOm5johlvzSeAkBRGb675o3xQ) 2026-04-07 01:43:33.314295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCja6vDH+l/j9+9h9bloEhkFlhyETGohta0YNo9Qu2Rh8ZsyEJuFFCM815UqOC8Vsm5fzgO9/G7THXj4XYDDMMuta6i8/CjWjsgt7dodgmcI4pSiugVU0sPY0GufRbZ4kSM5/ekiDbTMhnxVhtiSDR69x7IzzIe8NuDEEadkPSV5B3zeDaaftaa9Eej6FUeP1puYRuYtxlzshJjVGqv6A8e+9iBY3l7fYY/GW7hPXH5wFbW4KmIrwCukFmf7U4hbJFYGqhS2VR0DnXEPhJPAxSNKrPTeKMf+fowglqLgiNrH8poCloGIZ9wDwYjSna25hiZDfJPLaIPff5kt1Afl4eDn/nkliTIswBrxMwgUMRhUkt1YQy+uJy0BBj2cHworjg4uGNbKs0q4n477SpsrJ1FqZ7IV65fza+M3xMkKLZjU3OC9QiXh5HWAgv8H/N2ntJ+oRUJYRfQKHdVX2DMK+X+5+okesHLHa/ku+BGpXCGlyDciH8GwMMezi6KG+uZmP0=) 2026-04-07 01:43:33.314316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDK0kqbCdDjPNvRjfpvFfjfNMrPV59Z8bd5K/5HzaidvC4IadXDDJfgpHDk7UcnmHJeZCqnIHr1Y0QZBIfkbDnE=) 2026-04-07 01:43:33.314334 | orchestrator | 2026-04-07 01:43:33.314353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:33.314371 | orchestrator | Tuesday 07 April 2026 01:43:32 +0000 (0:00:01.250) 0:00:24.130 ********* 2026-04-07 01:43:33.314391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGTl27jkbJLG7ADsiiUqr/J4QntPv9DVnXaMWOq9ZjnArXN4vxG8EjFvfJm6ltXWJBm0l8Oqa+0Yjo8w5RGa7iIyJUax8gCExwIsoiVZ5beF1u2AnRojFInhgm0NCRUXXHg6LB/lN+dlDMgUIwHEHbOiPFpZi+at7YJVru0+n+cgTjvFESAW/StI5aAakzf1VmSrG2B3XbpuB6u4BqqEoktT/uXOsaM9UshMfVnUZMKq6l9HgJgG3/NgxL6KCRM0PGYGy9N9VSF6l0Yeb8BtgBBYUUAN+nkQm7UrxBDmkeu/3d0hQZ650ODl5mbXg1INDNBRXwUoNT4ikBK0Iis4suRED9QKRtSoiAEJKEv5/RFK7Wy1puOxtF/+/c7oUxfnpOZEu7tilkATYR0RMDvCiuo42sibWLgT0UmYs9nRLBQrTC9c2vb69kcb+UaNwG5socp2Oy+V9HPk87Pdvag4GAKVe+s9XhsZM5xVqKgIh+iUVei/g9+gKaBwX5LlMdwtk=) 2026-04-07 01:43:33.314411 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGxt590o6UZ/zFFbtUzw5CsHLwQKwcJvrFTubT2aHK8xmlz6CNKR1g0x8zAB3moBTWGXvn807y37mi2b9hOG2g=) 
2026-04-07 01:43:33.314452 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBT4xLNFp1/a6K20+1YF5eykDVKg8ly/t7MUA6YHPxrv) 2026-04-07 01:43:38.241274 | orchestrator | 2026-04-07 01:43:38.241388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:38.241433 | orchestrator | Tuesday 07 April 2026 01:43:33 +0000 (0:00:01.146) 0:00:25.276 ********* 2026-04-07 01:43:38.241446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJa1mHxWvMlgrN8Tuc1KE3t38yv0N+e6otm7FOqAVdXG) 2026-04-07 01:43:38.241461 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq2LvlCyfz2u4kKkB8C7UZRpl5h1EyhYiigArlS/g1JSdVDz/eJmdbWCJrXpjBeapPe+emguUUbiiY8seSlI/xj9xW/DK2Q5KDPOJy5o8fGQqre0G31BIEv6zKucWQLFnWiTRMAd+bIlgddLesgbW5vIi5P160/0Br0n171hSalRUewHSasKVaOxNMtm/Y7N8w8inpoLnC+oukioveTqaVJVvY2MSwfcAJfqRczm01eNJS/x869TS+3ViBEcykmPK4BZkB70rnPiK1C+lCFLAR/jULp8O2r2+enCiNDhTD2ZIhw18xaEQEkUVBaQg13hOO2b1Y8aEvBQoOzdQqK8Ha3t2wpSKEG5ATuCGFt/24xnJLtKbkp+Ms/2fl8ZM+no26EqnL06WdbHybUGwtUGvs4imyz0OkHkwBLuU734EcfxBy/ctKdubQb38thchVnshC7XSdEaxZT6Su23ysOOT48q4fDkYee9hiitFe6j1Q/dSz7OI6QipFdd6wrh8caC0=) 2026-04-07 01:43:38.241475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCH2IZrQXmYDXwMxWtO6dMU4bvGnz7Eix8mlHa7J0xbxOZZuvwWURIHzK6KiGBKy71XHafKYvamZphv8ilAVfWU=) 2026-04-07 01:43:38.241488 | orchestrator | 2026-04-07 01:43:38.241498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:38.241508 | orchestrator | Tuesday 07 April 2026 01:43:34 +0000 (0:00:01.145) 0:00:26.422 ********* 2026-04-07 01:43:38.241519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCMPptej3zsZEY+P1TBzpZoV3Qy9t4xTIwD+suTxXqt+o6R/+eumMChrMk/3DIP6PvKU7vtL73C3LlSKG8JUWghEqeqXTqDa2DnDA8KrRnT+gaHmN1n+dFSGqXQVQAcRIysNlMS/R4u4uMiRcCX5RLik/HafmFFqpmZcpCl2ACEyFjCR+0lVtLMZ/sE2onHRf4y18nAVxH0mCi8KBifSNBrdUTfYHkpw3IEIv2yEV8kq+Y5IrLol+aDbZT4OOEKwIej/n76yKJAK0ZOjHbtYvr4YkjlvLQJe9DtoaAD2zR4gMYEXdLpEuLNLbVboty36hrGAFweLhO9HtXkLv/xdFbq+4JyxKRYc5nyFX9K9nHJGNuPsjIhMhxOY6FyHXsK1t3pbWN2XrTfJGfR59vedOYWNa6imfG1TZ6sJHnhycehBfsepBrLgAxYN5H4guq86HtiB+09kfmyzEwP1WLTMLSjBaqsN8icWTgmtyZso8S+YNaj+luqkfc3MErGVF+arw0=) 2026-04-07 01:43:38.241530 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdEfccpXI5eDxqZsix+iQEQ8qK5uCyk05qUvhyJ5QQ/7x+zWM6IdV+31Qn/MmyeuQz2csaciWIF5vEQ7Gd4cwE=) 2026-04-07 01:43:38.241540 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKCL/LMKquYsr+sU6CLtTcIbSATQGrr2XnYckAUKbCOt) 2026-04-07 01:43:38.241550 | orchestrator | 2026-04-07 01:43:38.241560 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-07 01:43:38.241570 | orchestrator | Tuesday 07 April 2026 01:43:35 +0000 (0:00:01.187) 0:00:27.609 ********* 2026-04-07 01:43:38.241707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLBvMJv+F8m3P9oooJRYOi5+cFUWD4nhEmk0YSOSkiasJnsLXKxCZJmJTYQcxETDmMwWyiefPTMats0H4mhfPX8KlOaMV5W7+AJpfxH8xSC8CYJC/83w0t02Zxqi7WwWy61GZG1S3L0gKvpDjqnxVpsdQ8JLBsTEXKvKYfBuUn2hYrUD40UA4SgKrfpsoTSTXOEIbKsgB/EjFy96XA46U7kITmHgRnZqS2yFdvTLL393eZX6woGvWM7IPSikQ7qT5srkrmR69qDSbxI9qZvJkXb+6O7X5y8vHE6JGDHaFilJdwqsZJutiw62vXg5NkxXzEebxfNMntmH99gy+eVEQ7VYOrzfTEnpiMdfpgNmSmpTRzCcJuvAFktT0BPxVxzBcB9m9MlJbcAs81Zj9Jq89EfWqnlyV1lX9Qovu92Q0Wl4m4vtgp660iaX6nH15CHPhMpg8OE3h0BhL6aDE/XT3s325DWW6HGR2luVfZuwnWkd+Z5+skQLJWJYVeNC0aKDE=) 2026-04-07 01:43:38.241727 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPNhHBkIo4j5VXlwQpHXmq8iXXCaqaMADPPx+hpnVg+3qAVDRLwDVGwD+98nWzkVEvVeGEmUt9Vlz57g/tF9Dhg=) 2026-04-07 01:43:38.241738 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMx8yg0atvLR1sXe94G9tNm5H/2T+gtnBrAtn6G8L6q) 2026-04-07 01:43:38.241748 | orchestrator | 2026-04-07 01:43:38.241758 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-07 01:43:38.241791 | orchestrator | Tuesday 07 April 2026 01:43:36 +0000 (0:00:01.185) 0:00:28.795 ********* 2026-04-07 01:43:38.241805 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-07 01:43:38.241818 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-07 01:43:38.241829 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-07 01:43:38.241840 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-07 01:43:38.241852 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-07 01:43:38.241863 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-07 01:43:38.241875 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-07 01:43:38.241886 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:43:38.241898 | orchestrator | 2026-04-07 01:43:38.241928 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-07 01:43:38.241940 | orchestrator | Tuesday 07 April 2026 01:43:37 +0000 (0:00:00.199) 0:00:28.995 ********* 2026-04-07 01:43:38.241951 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:43:38.241963 | orchestrator | 2026-04-07 01:43:38.241974 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-07 01:43:38.241991 | orchestrator | Tuesday 07 April 2026 
01:43:37 +0000 (0:00:00.057) 0:00:29.053 ********* 2026-04-07 01:43:38.242081 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:43:38.242092 | orchestrator | 2026-04-07 01:43:38.242102 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-07 01:43:38.242112 | orchestrator | Tuesday 07 April 2026 01:43:37 +0000 (0:00:00.076) 0:00:29.130 ********* 2026-04-07 01:43:38.242122 | orchestrator | changed: [testbed-manager] 2026-04-07 01:43:38.242132 | orchestrator | 2026-04-07 01:43:38.242142 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:43:38.242153 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 01:43:38.242164 | orchestrator | 2026-04-07 01:43:38.242174 | orchestrator | 2026-04-07 01:43:38.242184 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 01:43:38.242193 | orchestrator | Tuesday 07 April 2026 01:43:37 +0000 (0:00:00.823) 0:00:29.953 ********* 2026-04-07 01:43:38.242203 | orchestrator | =============================================================================== 2026-04-07 01:43:38.242213 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.27s 2026-04-07 01:43:38.242223 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.55s 2026-04-07 01:43:38.242234 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-04-07 01:43:38.242244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2026-04-07 01:43:38.242254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-07 01:43:38.242264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 
2026-04-07 01:43:38.242274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-07 01:43:38.242284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-07 01:43:38.242294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-07 01:43:38.242303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-07 01:43:38.242313 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-07 01:43:38.242323 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-07 01:43:38.242333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-07 01:43:38.242343 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-07 01:43:38.242361 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-07 01:43:38.242371 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-07 01:43:38.242381 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.82s 2026-04-07 01:43:38.242391 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-04-07 01:43:38.242402 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-04-07 01:43:38.242412 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2026-04-07 01:43:38.659266 | orchestrator | + osism apply squid 2026-04-07 01:43:50.970853 | orchestrator | 2026-04-07 01:43:50 | INFO  | Task 227d9a9d-586f-4826-9fca-d232ccb63ad1 (squid) was prepared for execution. 
2026-04-07 01:43:50.970939 | orchestrator | 2026-04-07 01:43:50 | INFO  | It takes a moment until task 227d9a9d-586f-4826-9fca-d232ccb63ad1 (squid) has been started and output is visible here. 2026-04-07 01:45:54.825808 | orchestrator | 2026-04-07 01:45:54.825924 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-07 01:45:54.825939 | orchestrator | 2026-04-07 01:45:54.825949 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-07 01:45:54.825959 | orchestrator | Tuesday 07 April 2026 01:43:55 +0000 (0:00:00.176) 0:00:00.176 ********* 2026-04-07 01:45:54.825966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 01:45:54.825972 | orchestrator | 2026-04-07 01:45:54.825978 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-07 01:45:54.825984 | orchestrator | Tuesday 07 April 2026 01:43:55 +0000 (0:00:00.084) 0:00:00.260 ********* 2026-04-07 01:45:54.825989 | orchestrator | ok: [testbed-manager] 2026-04-07 01:45:54.825996 | orchestrator | 2026-04-07 01:45:54.826001 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-07 01:45:54.826007 | orchestrator | Tuesday 07 April 2026 01:43:57 +0000 (0:00:01.724) 0:00:01.984 ********* 2026-04-07 01:45:54.826013 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-07 01:45:54.826057 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-07 01:45:54.826066 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-07 01:45:54.826074 | orchestrator | 2026-04-07 01:45:54.826083 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-07 01:45:54.826092 | orchestrator | Tuesday 07 
April 2026 01:43:58 +0000 (0:00:01.224) 0:00:03.209 ********* 2026-04-07 01:45:54.826101 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-07 01:45:54.826110 | orchestrator | 2026-04-07 01:45:54.826118 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-07 01:45:54.826127 | orchestrator | Tuesday 07 April 2026 01:43:59 +0000 (0:00:01.140) 0:00:04.350 ********* 2026-04-07 01:45:54.826133 | orchestrator | ok: [testbed-manager] 2026-04-07 01:45:54.826139 | orchestrator | 2026-04-07 01:45:54.826144 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-07 01:45:54.826150 | orchestrator | Tuesday 07 April 2026 01:44:00 +0000 (0:00:00.374) 0:00:04.724 ********* 2026-04-07 01:45:54.826156 | orchestrator | changed: [testbed-manager] 2026-04-07 01:45:54.826162 | orchestrator | 2026-04-07 01:45:54.826167 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-07 01:45:54.826173 | orchestrator | Tuesday 07 April 2026 01:44:01 +0000 (0:00:01.000) 0:00:05.724 ********* 2026-04-07 01:45:54.826178 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-07 01:45:54.826187 | orchestrator | ok: [testbed-manager] 2026-04-07 01:45:54.826193 | orchestrator | 2026-04-07 01:45:54.826198 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-07 01:45:54.826222 | orchestrator | Tuesday 07 April 2026 01:44:41 +0000 (0:00:40.464) 0:00:46.188 ********* 2026-04-07 01:45:54.826228 | orchestrator | changed: [testbed-manager] 2026-04-07 01:45:54.826233 | orchestrator | 2026-04-07 01:45:54.826239 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-07 01:45:54.826244 | orchestrator | Tuesday 07 April 2026 01:44:53 +0000 (0:00:12.169) 0:00:58.357 ********* 2026-04-07 01:45:54.826249 | orchestrator | Pausing for 60 seconds 2026-04-07 01:45:54.826255 | orchestrator | changed: [testbed-manager] 2026-04-07 01:45:54.826260 | orchestrator | 2026-04-07 01:45:54.826266 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-07 01:45:54.826271 | orchestrator | Tuesday 07 April 2026 01:45:53 +0000 (0:01:00.097) 0:01:58.454 ********* 2026-04-07 01:45:54.826276 | orchestrator | ok: [testbed-manager] 2026-04-07 01:45:54.826281 | orchestrator | 2026-04-07 01:45:54.826287 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-07 01:45:54.826292 | orchestrator | Tuesday 07 April 2026 01:45:53 +0000 (0:00:00.070) 0:01:58.525 ********* 2026-04-07 01:45:54.826297 | orchestrator | changed: [testbed-manager] 2026-04-07 01:45:54.826303 | orchestrator | 2026-04-07 01:45:54.826308 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:45:54.826313 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 01:45:54.826318 | orchestrator | 2026-04-07 01:45:54.826324 | orchestrator | 2026-04-07 01:45:54.826329 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-07 01:45:54.826334 | orchestrator | Tuesday 07 April 2026 01:45:54 +0000 (0:00:00.682) 0:01:59.208 ********* 2026-04-07 01:45:54.826340 | orchestrator | =============================================================================== 2026-04-07 01:45:54.826358 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-04-07 01:45:54.826363 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 40.46s 2026-04-07 01:45:54.826369 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.17s 2026-04-07 01:45:54.826375 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.72s 2026-04-07 01:45:54.826381 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-04-07 01:45:54.826387 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.14s 2026-04-07 01:45:54.826393 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2026-04-07 01:45:54.826399 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2026-04-07 01:45:54.826405 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-04-07 01:45:54.826411 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-07 01:45:54.826417 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-04-07 01:45:55.187155 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-07 01:45:55.187402 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-07 01:45:55.256960 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-07 01:45:55.257071 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-04-07 01:45:55.267265 | orchestrator | + set -e 2026-04-07 01:45:55.267370 | orchestrator | + NAMESPACE=kolla/release 2026-04-07 01:45:55.267386 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-07 01:45:55.272922 | orchestrator | ++ semver 9.5.0 9.0.0 2026-04-07 01:45:55.337935 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-07 01:45:55.338928 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-07 01:46:07.640806 | orchestrator | 2026-04-07 01:46:07 | INFO  | Task be5f6f88-b774-4479-a47c-a760ce933376 (operator) was prepared for execution. 2026-04-07 01:46:07.640920 | orchestrator | 2026-04-07 01:46:07 | INFO  | It takes a moment until task be5f6f88-b774-4479-a47c-a760ce933376 (operator) has been started and output is visible here. 2026-04-07 01:46:24.729522 | orchestrator | 2026-04-07 01:46:24.729680 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-07 01:46:24.729697 | orchestrator | 2026-04-07 01:46:24.729704 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-07 01:46:24.729712 | orchestrator | Tuesday 07 April 2026 01:46:12 +0000 (0:00:00.159) 0:00:00.159 ********* 2026-04-07 01:46:24.729719 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:46:24.729727 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:46:24.729734 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:46:24.729741 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:46:24.729748 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:46:24.729754 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:46:24.729761 | orchestrator | 2026-04-07 01:46:24.729768 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-07 01:46:24.729775 | orchestrator | Tuesday 07 April 2026 01:46:15 +0000 (0:00:03.554) 0:00:03.713 
*********
2026-04-07 01:46:24.729782 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:46:24.729789 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:46:24.729796 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:46:24.729810 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:46:24.729817 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:46:24.729824 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:46:24.729831 | orchestrator |
2026-04-07 01:46:24.729838 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-07 01:46:24.729845 | orchestrator |
2026-04-07 01:46:24.729851 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-07 01:46:24.729858 | orchestrator | Tuesday 07 April 2026 01:46:16 +0000 (0:00:00.930) 0:00:04.644 *********
2026-04-07 01:46:24.729865 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:46:24.729872 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:46:24.729879 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:46:24.729886 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:46:24.729893 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:46:24.729900 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:46:24.729908 | orchestrator |
2026-04-07 01:46:24.729914 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-07 01:46:24.729921 | orchestrator | Tuesday 07 April 2026 01:46:16 +0000 (0:00:00.219) 0:00:04.863 *********
2026-04-07 01:46:24.729928 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:46:24.729935 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:46:24.729941 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:46:24.729948 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:46:24.729955 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:46:24.729962 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:46:24.729969 | orchestrator |
2026-04-07 01:46:24.729976 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-07 01:46:24.729983 | orchestrator | Tuesday 07 April 2026 01:46:17 +0000 (0:00:00.205) 0:00:05.068 *********
2026-04-07 01:46:24.729990 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:24.729997 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:24.730004 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:24.730011 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:24.730071 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:24.730079 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:24.730086 | orchestrator |
2026-04-07 01:46:24.730092 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-07 01:46:24.730099 | orchestrator | Tuesday 07 April 2026 01:46:17 +0000 (0:00:00.660) 0:00:05.729 *********
2026-04-07 01:46:24.730106 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:24.730113 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:24.730120 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:24.730127 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:24.730134 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:24.730141 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:24.730165 | orchestrator |
2026-04-07 01:46:24.730172 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-07 01:46:24.730179 | orchestrator | Tuesday 07 April 2026 01:46:18 +0000 (0:00:00.794) 0:00:06.524 *********
2026-04-07 01:46:24.730186 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-07 01:46:24.730194 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-07 01:46:24.730200 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-07 01:46:24.730207 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-07 01:46:24.730214 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-07 01:46:24.730220 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-07 01:46:24.730227 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-07 01:46:24.730233 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-07 01:46:24.730240 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-07 01:46:24.730246 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-07 01:46:24.730253 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-07 01:46:24.730260 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-07 01:46:24.730266 | orchestrator |
2026-04-07 01:46:24.730273 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-07 01:46:24.730280 | orchestrator | Tuesday 07 April 2026 01:46:19 +0000 (0:00:01.296) 0:00:07.820 *********
2026-04-07 01:46:24.730287 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:24.730294 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:24.730300 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:24.730307 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:24.730313 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:24.730318 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:24.730324 | orchestrator |
2026-04-07 01:46:24.730330 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-07 01:46:24.730338 | orchestrator | Tuesday 07 April 2026 01:46:21 +0000 (0:00:01.260) 0:00:09.081 *********
2026-04-07 01:46:24.730344 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-07 01:46:24.730351 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-07 01:46:24.730358 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-07 01:46:24.730365 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730387 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730394 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730400 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730407 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730414 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-07 01:46:24.730421 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730428 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730434 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730441 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730447 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730454 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-07 01:46:24.730461 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730467 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730474 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730481 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730488 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730501 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-07 01:46:24.730508 | orchestrator |
2026-04-07 01:46:24.730514 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-07 01:46:24.730522 | orchestrator | Tuesday 07 April 2026 01:46:22 +0000 (0:00:01.328) 0:00:10.410 *********
2026-04-07 01:46:24.730529 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:24.730535 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:24.730542 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:24.730549 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:24.730555 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:24.730562 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:24.730568 | orchestrator |
2026-04-07 01:46:24.730575 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-07 01:46:24.730582 | orchestrator | Tuesday 07 April 2026 01:46:22 +0000 (0:00:00.187) 0:00:10.597 *********
2026-04-07 01:46:24.730589 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:24.730596 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:24.730602 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:24.730609 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:24.730634 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:24.730641 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:24.730647 | orchestrator |
2026-04-07 01:46:24.730653 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-07 01:46:24.730660 | orchestrator | Tuesday 07 April 2026 01:46:22 +0000 (0:00:00.186) 0:00:10.784 *********
2026-04-07 01:46:24.730666 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:24.730673 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:24.730680 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:24.730687 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:24.730693 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:24.730700 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:24.730706 | orchestrator |
2026-04-07 01:46:24.730713 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-07 01:46:24.730720 | orchestrator | Tuesday 07 April 2026 01:46:23 +0000 (0:00:00.649) 0:00:11.433 *********
2026-04-07 01:46:24.730727 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:24.730734 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:24.730741 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:24.730747 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:24.730762 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:24.730769 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:24.730776 | orchestrator |
2026-04-07 01:46:24.730782 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-07 01:46:24.730788 | orchestrator | Tuesday 07 April 2026 01:46:23 +0000 (0:00:00.222) 0:00:11.656 *********
2026-04-07 01:46:24.730795 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 01:46:24.730802 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 01:46:24.730808 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-07 01:46:24.730816 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 01:46:24.730822 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:24.730829 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:24.730836 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:24.730843 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:24.730849 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-07 01:46:24.730856 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:24.730862 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 01:46:24.730869 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:24.730876 | orchestrator |
2026-04-07 01:46:24.730883 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-07 01:46:24.730890 | orchestrator | Tuesday 07 April 2026 01:46:24 +0000 (0:00:00.752) 0:00:12.409 *********
2026-04-07 01:46:24.730902 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:24.730909 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:24.730915 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:24.730922 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:24.730929 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:24.730935 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:24.730942 | orchestrator |
2026-04-07 01:46:24.730949 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-07 01:46:24.730956 | orchestrator | Tuesday 07 April 2026 01:46:24 +0000 (0:00:00.169) 0:00:12.579 *********
2026-04-07 01:46:24.730963 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:24.730970 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:24.730977 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:24.730983 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:24.730996 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:26.180476 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:26.180565 | orchestrator |
2026-04-07 01:46:26.180577 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-07 01:46:26.180589 | orchestrator | Tuesday 07 April 2026 01:46:24 +0000 (0:00:00.174) 0:00:12.753 *********
2026-04-07 01:46:26.180599 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:26.180608 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:26.180673 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:26.180685 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:26.180694 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:26.180703 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:26.180712 | orchestrator |
2026-04-07 01:46:26.180721 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-07 01:46:26.180730 | orchestrator | Tuesday 07 April 2026 01:46:24 +0000 (0:00:00.205) 0:00:12.958 *********
2026-04-07 01:46:26.180739 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:46:26.180747 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:46:26.180764 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:46:26.180774 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:46:26.180784 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:46:26.180792 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:46:26.180801 | orchestrator |
2026-04-07 01:46:26.180809 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-07 01:46:26.180818 | orchestrator | Tuesday 07 April 2026 01:46:25 +0000 (0:00:00.713) 0:00:13.672 *********
2026-04-07 01:46:26.180827 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:46:26.180837 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:46:26.180846 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:46:26.180855 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:46:26.180864 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:46:26.180873 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:46:26.180882 | orchestrator |
2026-04-07 01:46:26.180890 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:46:26.180900 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180910 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180919 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180928 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180937 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180966 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 01:46:26.180975 | orchestrator |
2026-04-07 01:46:26.180984 | orchestrator |
2026-04-07 01:46:26.180993 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:46:26.181002 | orchestrator | Tuesday 07 April 2026 01:46:25 +0000 (0:00:00.247) 0:00:13.920 *********
2026-04-07 01:46:26.181010 | orchestrator | ===============================================================================
2026-04-07 01:46:26.181019 | orchestrator | Gathering Facts --------------------------------------------------------- 3.55s
2026-04-07 01:46:26.181029 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2026-04-07 01:46:26.181040 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.30s
2026-04-07 01:46:26.181050 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-04-07 01:46:26.181059 | orchestrator | Do not require tty for all users ---------------------------------------- 0.93s
2026-04-07 01:46:26.181067 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2026-04-07 01:46:26.181077 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2026-04-07 01:46:26.181086 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2026-04-07 01:46:26.181095 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s
2026-04-07 01:46:26.181105 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s
2026-04-07 01:46:26.181114 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-04-07 01:46:26.181124 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s
2026-04-07 01:46:26.181132 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.22s
2026-04-07 01:46:26.181140 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s
2026-04-07 01:46:26.181150 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.21s
2026-04-07 01:46:26.181160 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s
2026-04-07 01:46:26.181169 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-04-07 01:46:26.181178 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-04-07 01:46:26.181187 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-04-07 01:46:26.550831 | orchestrator | + osism apply --environment custom facts
2026-04-07 01:46:28.490005 | orchestrator | 2026-04-07 01:46:28 | INFO  | Trying to run play facts in environment custom
2026-04-07 01:46:38.644483 | orchestrator | 2026-04-07 01:46:38 | INFO  | Task 55df841e-da78-4c0c-bf69-e53908ad6aab (facts) was prepared for execution.
2026-04-07 01:46:38.644614 | orchestrator | 2026-04-07 01:46:38 | INFO  | It takes a moment until task 55df841e-da78-4c0c-bf69-e53908ad6aab (facts) has been started and output is visible here.
2026-04-07 01:47:21.734719 | orchestrator | 2026-04-07 01:47:21.734817 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-07 01:47:21.734829 | orchestrator | 2026-04-07 01:47:21.734838 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-07 01:47:21.734845 | orchestrator | Tuesday 07 April 2026 01:46:42 +0000 (0:00:00.088) 0:00:00.088 ********* 2026-04-07 01:47:21.734853 | orchestrator | ok: [testbed-manager] 2026-04-07 01:47:21.734862 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.734868 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.734874 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:47:21.734881 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:47:21.734887 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:47:21.734914 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.734921 | orchestrator | 2026-04-07 01:47:21.734928 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-07 01:47:21.734934 | orchestrator | Tuesday 07 April 2026 01:46:44 +0000 (0:00:01.332) 0:00:01.421 ********* 2026-04-07 01:47:21.734941 | orchestrator | ok: [testbed-manager] 2026-04-07 01:47:21.734948 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:47:21.734953 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.734959 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.734966 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:47:21.734973 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:47:21.734980 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.734986 | orchestrator | 2026-04-07 01:47:21.734992 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-07 01:47:21.734998 | orchestrator | 2026-04-07 01:47:21.735005 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-04-07 01:47:21.735012 | orchestrator | Tuesday 07 April 2026 01:46:45 +0000 (0:00:01.164) 0:00:02.585 ********* 2026-04-07 01:47:21.735019 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735026 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735032 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735037 | orchestrator | 2026-04-07 01:47:21.735044 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-07 01:47:21.735052 | orchestrator | Tuesday 07 April 2026 01:46:45 +0000 (0:00:00.120) 0:00:02.706 ********* 2026-04-07 01:47:21.735059 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735065 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735072 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735079 | orchestrator | 2026-04-07 01:47:21.735085 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-07 01:47:21.735091 | orchestrator | Tuesday 07 April 2026 01:46:45 +0000 (0:00:00.205) 0:00:02.912 ********* 2026-04-07 01:47:21.735097 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735104 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735111 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735118 | orchestrator | 2026-04-07 01:47:21.735125 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-07 01:47:21.735131 | orchestrator | Tuesday 07 April 2026 01:46:46 +0000 (0:00:00.236) 0:00:03.149 ********* 2026-04-07 01:47:21.735139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:47:21.735148 | orchestrator | 2026-04-07 01:47:21.735155 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-04-07 01:47:21.735162 | orchestrator | Tuesday 07 April 2026 01:46:46 +0000 (0:00:00.140) 0:00:03.289 ********* 2026-04-07 01:47:21.735167 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735174 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735181 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735188 | orchestrator | 2026-04-07 01:47:21.735194 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-07 01:47:21.735201 | orchestrator | Tuesday 07 April 2026 01:46:46 +0000 (0:00:00.395) 0:00:03.684 ********* 2026-04-07 01:47:21.735207 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:47:21.735214 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:47:21.735221 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:47:21.735228 | orchestrator | 2026-04-07 01:47:21.735235 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-07 01:47:21.735242 | orchestrator | Tuesday 07 April 2026 01:46:46 +0000 (0:00:00.162) 0:00:03.847 ********* 2026-04-07 01:47:21.735248 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.735254 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.735261 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.735268 | orchestrator | 2026-04-07 01:47:21.735275 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-07 01:47:21.735289 | orchestrator | Tuesday 07 April 2026 01:46:47 +0000 (0:00:00.974) 0:00:04.821 ********* 2026-04-07 01:47:21.735295 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735301 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735307 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735313 | orchestrator | 2026-04-07 01:47:21.735319 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-07 
01:47:21.735368 | orchestrator | Tuesday 07 April 2026 01:46:48 +0000 (0:00:00.446) 0:00:05.268 ********* 2026-04-07 01:47:21.735376 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.735383 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.735390 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.735397 | orchestrator | 2026-04-07 01:47:21.735403 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-07 01:47:21.735410 | orchestrator | Tuesday 07 April 2026 01:46:49 +0000 (0:00:00.966) 0:00:06.235 ********* 2026-04-07 01:47:21.735416 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.735422 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.735429 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.735436 | orchestrator | 2026-04-07 01:47:21.735443 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-07 01:47:21.735450 | orchestrator | Tuesday 07 April 2026 01:47:05 +0000 (0:00:16.223) 0:00:22.458 ********* 2026-04-07 01:47:21.735457 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:47:21.735464 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:47:21.735470 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:47:21.735476 | orchestrator | 2026-04-07 01:47:21.735482 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-07 01:47:21.735506 | orchestrator | Tuesday 07 April 2026 01:47:05 +0000 (0:00:00.130) 0:00:22.589 ********* 2026-04-07 01:47:21.735513 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:47:21.735520 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:47:21.735526 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:47:21.735533 | orchestrator | 2026-04-07 01:47:21.735543 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-07 
01:47:21.735549 | orchestrator | Tuesday 07 April 2026 01:47:12 +0000 (0:00:07.265) 0:00:29.854 ********* 2026-04-07 01:47:21.735556 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:47:21.735563 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:47:21.735570 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:47:21.735576 | orchestrator | 2026-04-07 01:47:21.735583 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-07 01:47:21.735590 | orchestrator | Tuesday 07 April 2026 01:47:13 +0000 (0:00:00.489) 0:00:30.343 ********* 2026-04-07 01:47:21.735596 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-07 01:47:21.735603 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-07 01:47:21.735610 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-07 01:47:21.735617 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-07 01:47:21.735639 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-07 01:47:21.735645 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-07 01:47:21.735652 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-07 01:47:21.735658 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-07 01:47:21.735664 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-07 01:47:21.735670 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-07 01:47:21.735678 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-07 01:47:21.735685 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-07 01:47:21.735692 | orchestrator | 2026-04-07 01:47:21.735698 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] *****
2026-04-07 01:47:21.735712 | orchestrator | Tuesday 07 April 2026 01:47:16 +0000 (0:00:03.490) 0:00:33.834 *********
2026-04-07 01:47:21.735719 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:21.735726 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:21.735733 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:21.735738 | orchestrator |
2026-04-07 01:47:21.735745 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 01:47:21.735752 | orchestrator |
2026-04-07 01:47:21.735760 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 01:47:21.735767 | orchestrator | Tuesday 07 April 2026 01:47:18 +0000 (0:00:01.343) 0:00:35.178 *********
2026-04-07 01:47:21.735774 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:47:21.735779 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:47:21.735786 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:47:21.735793 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:21.735800 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:21.735806 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:21.735812 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:21.735819 | orchestrator |
2026-04-07 01:47:21.735825 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:47:21.735832 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:47:21.735839 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:47:21.735848 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:47:21.735854 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:47:21.735861 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:47:21.735868 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:47:21.735874 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:47:21.735881 | orchestrator |
2026-04-07 01:47:21.735888 | orchestrator |
2026-04-07 01:47:21.735895 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:47:21.735902 | orchestrator | Tuesday 07 April 2026 01:47:21 +0000 (0:00:03.647) 0:00:38.826 *********
2026-04-07 01:47:21.735909 | orchestrator | ===============================================================================
2026-04-07 01:47:21.735916 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.22s
2026-04-07 01:47:21.735923 | orchestrator | Install required packages (Debian) -------------------------------------- 7.27s
2026-04-07 01:47:21.735930 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.65s
2026-04-07 01:47:21.735937 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2026-04-07 01:47:21.735944 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.34s
2026-04-07 01:47:21.735951 | orchestrator | Create custom facts directory ------------------------------------------- 1.33s
2026-04-07 01:47:21.735963 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2026-04-07 01:47:22.005549 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2026-04-07 01:47:22.005684 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.97s
2026-04-07 01:47:22.005717 | orchestrator | Create custom facts directory ------------------------------------------- 0.49s
2026-04-07 01:47:22.005748 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-04-07 01:47:22.005757 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-04-07 01:47:22.005766 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-04-07 01:47:22.005775 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-04-07 01:47:22.005784 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-04-07 01:47:22.005793 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-04-07 01:47:22.005803 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2026-04-07 01:47:22.005812 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-04-07 01:47:22.355771 | orchestrator | + osism apply bootstrap
2026-04-07 01:47:34.594804 | orchestrator | 2026-04-07 01:47:34 | INFO  | Task 1bffb7ab-ce5e-410e-9c6a-fe560f5abf02 (bootstrap) was prepared for execution.
2026-04-07 01:47:34.594952 | orchestrator | 2026-04-07 01:47:34 | INFO  | It takes a moment until task 1bffb7ab-ce5e-410e-9c6a-fe560f5abf02 (bootstrap) has been started and output is visible here.
2026-04-07 01:47:52.378612 | orchestrator |
2026-04-07 01:47:52.378749 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-07 01:47:52.378763 | orchestrator |
2026-04-07 01:47:52.378773 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-07 01:47:52.378782 | orchestrator | Tuesday 07 April 2026 01:47:39 +0000 (0:00:00.159) 0:00:00.159 *********
2026-04-07 01:47:52.378790 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:52.378800 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:52.378808 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:52.378817 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:52.378825 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:47:52.378833 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:47:52.378841 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:47:52.378850 | orchestrator |
2026-04-07 01:47:52.378859 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 01:47:52.378867 | orchestrator |
2026-04-07 01:47:52.378875 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 01:47:52.378884 | orchestrator | Tuesday 07 April 2026 01:47:39 +0000 (0:00:00.271) 0:00:00.430 *********
2026-04-07 01:47:52.378892 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:47:52.378900 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:47:52.378909 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:47:52.378917 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:52.378925 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:52.378934 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:52.378942 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:52.378950 | orchestrator |
2026-04-07 01:47:52.378958 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-07 01:47:52.378967 | orchestrator |
2026-04-07 01:47:52.378975 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 01:47:52.378983 | orchestrator | Tuesday 07 April 2026 01:47:44 +0000 (0:00:04.625) 0:00:05.056 *********
2026-04-07 01:47:52.378992 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-07 01:47:52.379001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-07 01:47:52.379009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-07 01:47:52.379018 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-07 01:47:52.379026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 01:47:52.379034 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-07 01:47:52.379043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 01:47:52.379051 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-07 01:47:52.379059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 01:47:52.379086 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-07 01:47:52.379095 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-07 01:47:52.379103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 01:47:52.379112 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-07 01:47:52.379120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 01:47:52.379128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-07 01:47:52.379137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 01:47:52.379145 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-07 01:47:52.379153 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-07 01:47:52.379162 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:47:52.379173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-07 01:47:52.379182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-07 01:47:52.379192 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:47:52.379201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 01:47:52.379210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-07 01:47:52.379219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-07 01:47:52.379228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 01:47:52.379238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 01:47:52.379247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 01:47:52.379256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-07 01:47:52.379266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-07 01:47:52.379275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 01:47:52.379285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 01:47:52.379294 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:47:52.379303 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-07 01:47:52.379312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-07 01:47:52.379321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 01:47:52.379330 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-07 01:47:52.379339 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-07 01:47:52.379348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 01:47:52.379357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 01:47:52.379366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-07 01:47:52.379375 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 01:47:52.379385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 01:47:52.379394 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-07 01:47:52.379403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 01:47:52.379413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 01:47:52.379434 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:47:52.379443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 01:47:52.379451 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:47:52.379459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-07 01:47:52.379483 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 01:47:52.379492 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:47:52.379500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 01:47:52.379508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 01:47:52.379523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 01:47:52.379531 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:47:52.379539 | orchestrator |
2026-04-07 01:47:52.379548 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-07 01:47:52.379556 | orchestrator |
2026-04-07 01:47:52.379564 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-07 01:47:52.379573 | orchestrator | Tuesday 07 April 2026 01:47:44 +0000 (0:00:00.524) 0:00:05.581 *********
2026-04-07 01:47:52.379581 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:52.379589 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:52.379597 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:52.379606 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:47:52.379614 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:47:52.379660 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:52.379669 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:47:52.379677 | orchestrator |
2026-04-07 01:47:52.379686 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-07 01:47:52.379694 | orchestrator | Tuesday 07 April 2026 01:47:45 +0000 (0:00:01.257) 0:00:06.838 *********
2026-04-07 01:47:52.379703 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:52.379711 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:47:52.379719 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:47:52.379727 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:47:52.379735 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:47:52.379743 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:47:52.379751 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:47:52.379760 | orchestrator |
2026-04-07 01:47:52.379768 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-07 01:47:52.379776 | orchestrator | Tuesday 07 April 2026 01:47:47 +0000 (0:00:00.296) 0:00:08.115 *********
2026-04-07 01:47:52.379785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:47:52.379796 | orchestrator |
2026-04-07 01:47:52.379805 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-07 01:47:52.379813 | orchestrator | Tuesday 07 April 2026 01:47:47 +0000 (0:00:00.296) 0:00:08.411 *********
2026-04-07 01:47:52.379821 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:47:52.379830 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:47:52.379838 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:47:52.379846 | orchestrator | changed: [testbed-manager]
2026-04-07 01:47:52.379854 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:47:52.379862 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:47:52.379870 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:47:52.379879 | orchestrator |
2026-04-07 01:47:52.379887 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-07 01:47:52.379895 | orchestrator | Tuesday 07 April 2026 01:47:49 +0000 (0:00:02.206) 0:00:10.617 *********
2026-04-07 01:47:52.379904 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:47:52.379913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:47:52.379923 | orchestrator |
2026-04-07 01:47:52.379932 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-07 01:47:52.379940 | orchestrator | Tuesday 07 April 2026 01:47:50 +0000 (0:00:00.310) 0:00:10.928 *********
2026-04-07 01:47:52.379949 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:47:52.379957 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:47:52.379965 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:47:52.379973 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:47:52.379981 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:47:52.379990 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:47:52.380004 | orchestrator |
2026-04-07 01:47:52.380016 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-07 01:47:52.380025 | orchestrator | Tuesday 07 April 2026 01:47:51 +0000 (0:00:01.063) 0:00:11.991 *********
2026-04-07 01:47:52.380033 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:47:52.380041 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:47:52.380050 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:47:52.380058 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:47:52.380066 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:47:52.380074 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:47:52.380082 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:47:52.380090 | orchestrator |
2026-04-07 01:47:52.380099 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-07 01:47:52.380107 | orchestrator | Tuesday 07 April 2026 01:47:51 +0000 (0:00:00.643) 0:00:12.634 *********
2026-04-07 01:47:52.380115 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:47:52.380123 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:47:52.380131 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:47:52.380140 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:47:52.380148 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:47:52.380156 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:47:52.380164 | orchestrator | ok: [testbed-manager]
2026-04-07 01:47:52.380172 | orchestrator |
2026-04-07 01:47:52.380181 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-07 01:47:52.380190 | orchestrator | Tuesday 07 April 2026 01:47:52 +0000 (0:00:00.264) 0:00:13.110 *********
2026-04-07 01:47:52.380198 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:47:52.380206 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:47:52.380227 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:48:05.021065 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:48:05.021155 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:48:05.021163 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:48:05.021167 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:48:05.021172 | orchestrator |
2026-04-07 01:48:05.021177 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-07 01:48:05.021183 | orchestrator | Tuesday 07 April 2026 01:47:52 +0000 (0:00:00.264) 0:00:13.375 *********
2026-04-07 01:48:05.021190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:05.021206 | orchestrator |
2026-04-07 01:48:05.021210 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-07 01:48:05.021216 | orchestrator | Tuesday 07 April 2026 01:47:52 +0000 (0:00:00.335) 0:00:13.710 *********
2026-04-07 01:48:05.021220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:05.021224 | orchestrator |
2026-04-07 01:48:05.021229 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-07 01:48:05.021233 | orchestrator | Tuesday 07 April 2026 01:47:53 +0000 (0:00:00.348) 0:00:14.059 *********
2026-04-07 01:48:05.021237 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021242 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021246 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021250 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021254 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021258 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021262 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021266 | orchestrator |
2026-04-07 01:48:05.021270 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-07 01:48:05.021274 | orchestrator | Tuesday 07 April 2026 01:47:54 +0000 (0:00:01.387) 0:00:15.446 *********
2026-04-07 01:48:05.021295 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:48:05.021299 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:48:05.021303 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:48:05.021307 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:48:05.021311 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:48:05.021315 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:48:05.021319 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:48:05.021323 | orchestrator |
2026-04-07 01:48:05.021326 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-07 01:48:05.021330 | orchestrator | Tuesday 07 April 2026 01:47:54 +0000 (0:00:00.262) 0:00:15.709 *********
2026-04-07 01:48:05.021334 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021338 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021342 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021346 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021350 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021354 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021358 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021362 | orchestrator |
2026-04-07 01:48:05.021366 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-07 01:48:05.021369 | orchestrator | Tuesday 07 April 2026 01:47:55 +0000 (0:00:00.567) 0:00:16.277 *********
2026-04-07 01:48:05.021373 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:48:05.021377 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:48:05.021381 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:48:05.021385 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:48:05.021389 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:48:05.021393 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:48:05.021397 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:48:05.021401 | orchestrator |
2026-04-07 01:48:05.021405 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-07 01:48:05.021410 | orchestrator | Tuesday 07 April 2026 01:47:55 +0000 (0:00:00.365) 0:00:16.642 *********
2026-04-07 01:48:05.021414 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021418 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:48:05.021422 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:48:05.021426 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:48:05.021430 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:05.021434 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:05.021442 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:05.021446 | orchestrator |
2026-04-07 01:48:05.021450 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-07 01:48:05.021454 | orchestrator | Tuesday 07 April 2026 01:47:56 +0000 (0:00:00.520) 0:00:17.162 *********
2026-04-07 01:48:05.021458 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021462 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:48:05.021466 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:48:05.021470 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:48:05.021474 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:05.021478 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:05.021482 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:05.021489 | orchestrator |
2026-04-07 01:48:05.021495 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-07 01:48:05.021501 | orchestrator | Tuesday 07 April 2026 01:47:57 +0000 (0:00:01.158) 0:00:18.321 *********
2026-04-07 01:48:05.021507 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021513 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021519 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021525 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021532 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021538 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021545 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021551 | orchestrator |
2026-04-07 01:48:05.021555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-07 01:48:05.021564 | orchestrator | Tuesday 07 April 2026 01:47:58 +0000 (0:00:01.069) 0:00:19.391 *********
2026-04-07 01:48:05.021586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:05.021593 | orchestrator |
2026-04-07 01:48:05.021600 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-07 01:48:05.021606 | orchestrator | Tuesday 07 April 2026 01:47:58 +0000 (0:00:00.357) 0:00:19.749 *********
2026-04-07 01:48:05.021613 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:48:05.021619 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:48:05.021644 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:05.021650 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:48:05.021654 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:48:05.021658 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:05.021662 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:05.021666 | orchestrator |
2026-04-07 01:48:05.021670 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-07 01:48:05.021674 | orchestrator | Tuesday 07 April 2026 01:48:00 +0000 (0:00:01.329) 0:00:21.079 *********
2026-04-07 01:48:05.021678 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021682 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021686 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021690 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021693 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021697 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021701 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021705 | orchestrator |
2026-04-07 01:48:05.021709 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-07 01:48:05.021713 | orchestrator | Tuesday 07 April 2026 01:48:00 +0000 (0:00:00.261) 0:00:21.341 *********
2026-04-07 01:48:05.021717 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021721 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021725 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021729 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021733 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021737 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021741 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021746 | orchestrator |
2026-04-07 01:48:05.021753 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-07 01:48:05.021759 | orchestrator | Tuesday 07 April 2026 01:48:00 +0000 (0:00:00.301) 0:00:21.642 *********
2026-04-07 01:48:05.021765 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021771 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021777 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021782 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021788 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021794 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021800 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021807 | orchestrator |
2026-04-07 01:48:05.021813 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-07 01:48:05.021818 | orchestrator | Tuesday 07 April 2026 01:48:01 +0000 (0:00:00.268) 0:00:21.911 *********
2026-04-07 01:48:05.021825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:05.021833 | orchestrator |
2026-04-07 01:48:05.021839 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-07 01:48:05.021845 | orchestrator | Tuesday 07 April 2026 01:48:01 +0000 (0:00:00.360) 0:00:22.271 *********
2026-04-07 01:48:05.021851 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021858 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021871 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.021877 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.021884 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.021890 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.021897 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.021903 | orchestrator |
2026-04-07 01:48:05.021910 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-07 01:48:05.021915 | orchestrator | Tuesday 07 April 2026 01:48:01 +0000 (0:00:00.545) 0:00:22.817 *********
2026-04-07 01:48:05.021921 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:48:05.021928 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:48:05.021934 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:48:05.021943 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:48:05.021950 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:48:05.021956 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:48:05.021962 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:48:05.021968 | orchestrator |
2026-04-07 01:48:05.021974 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-07 01:48:05.021980 | orchestrator | Tuesday 07 April 2026 01:48:02 +0000 (0:00:00.254) 0:00:23.071 *********
2026-04-07 01:48:05.021985 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.021991 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.021997 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.022002 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.022008 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:05.022060 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:05.022068 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:05.022072 | orchestrator |
2026-04-07 01:48:05.022077 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-07 01:48:05.022081 | orchestrator | Tuesday 07 April 2026 01:48:03 +0000 (0:00:01.095) 0:00:24.167 *********
2026-04-07 01:48:05.022085 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.022089 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.022093 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.022097 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.022101 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:05.022105 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:05.022116 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:05.022120 | orchestrator |
2026-04-07 01:48:05.022124 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-07 01:48:05.022128 | orchestrator | Tuesday 07 April 2026 01:48:03 +0000 (0:00:00.594) 0:00:24.762 *********
2026-04-07 01:48:05.022132 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:05.022136 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:05.022140 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:05.022144 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:05.022155 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:47.472244 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:47.472356 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:47.472374 | orchestrator |
2026-04-07 01:48:47.472387 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-07 01:48:47.472401 | orchestrator | Tuesday 07 April 2026 01:48:05 +0000 (0:00:01.141) 0:00:25.903 *********
2026-04-07 01:48:47.472412 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.472425 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.472436 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.472448 | orchestrator | changed: [testbed-manager]
2026-04-07 01:48:47.472460 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:47.472471 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:47.472483 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:47.472494 | orchestrator |
2026-04-07 01:48:47.472506 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-07 01:48:47.472518 | orchestrator | Tuesday 07 April 2026 01:48:21 +0000 (0:00:16.423) 0:00:42.327 *********
2026-04-07 01:48:47.472529 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:47.472563 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.472575 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.472587 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.472598 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:47.472610 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:47.472621 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:47.472709 | orchestrator |
2026-04-07 01:48:47.472729 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-07 01:48:47.472749 | orchestrator | Tuesday 07 April 2026 01:48:21 +0000 (0:00:00.222) 0:00:42.549 *********
2026-04-07 01:48:47.472763 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:47.472775 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.472786 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.472797 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.472808 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:47.472819 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:47.472831 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:47.472842 | orchestrator |
2026-04-07 01:48:47.472853 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-07 01:48:47.472865 | orchestrator | Tuesday 07 April 2026 01:48:21 +0000 (0:00:00.233) 0:00:42.783 *********
2026-04-07 01:48:47.472876 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:47.472887 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.472898 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.472909 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.472920 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:47.472932 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:47.472943 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:47.472955 | orchestrator |
2026-04-07 01:48:47.472966 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-07 01:48:47.472978 | orchestrator | Tuesday 07 April 2026 01:48:22 +0000 (0:00:00.287) 0:00:43.071 *********
2026-04-07 01:48:47.472991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:47.473005 | orchestrator |
2026-04-07 01:48:47.473016 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-07 01:48:47.473028 | orchestrator | Tuesday 07 April 2026 01:48:22 +0000 (0:00:00.339) 0:00:43.411 *********
2026-04-07 01:48:47.473039 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.473050 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.473062 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:47.473073 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:47.473084 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.473095 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:47.473106 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:47.473117 | orchestrator |
2026-04-07 01:48:47.473129 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-07 01:48:47.473140 | orchestrator | Tuesday 07 April 2026 01:48:24 +0000 (0:00:01.665) 0:00:45.076 *********
2026-04-07 01:48:47.473152 | orchestrator | changed: [testbed-manager]
2026-04-07 01:48:47.473163 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:48:47.473174 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:48:47.473186 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:48:47.473197 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:47.473208 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:47.473219 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:47.473230 | orchestrator |
2026-04-07 01:48:47.473241 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-07 01:48:47.473269 | orchestrator | Tuesday 07 April 2026 01:48:25 +0000 (0:00:01.099) 0:00:46.176 *********
2026-04-07 01:48:47.473280 | orchestrator | ok: [testbed-manager]
2026-04-07 01:48:47.473292 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:48:47.473303 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:48:47.473324 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:48:47.473335 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:48:47.473347 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:48:47.473358 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:48:47.473371 | orchestrator |
2026-04-07 01:48:47.473390 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-07 01:48:47.473408 | orchestrator | Tuesday 07 April 2026 01:48:26 +0000 (0:00:00.854) 0:00:47.031 *********
2026-04-07 01:48:47.473427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:48:47.473447 | orchestrator |
2026-04-07 01:48:47.473466 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-07 01:48:47.473484 | orchestrator | Tuesday 07 April 2026 01:48:26 +0000 (0:00:00.312) 0:00:47.343 *********
2026-04-07 01:48:47.473502 | orchestrator | changed: [testbed-manager]
2026-04-07 01:48:47.473519 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:48:47.473537 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:48:47.473553 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:48:47.473571 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:48:47.473587 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:48:47.473605 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:48:47.473622 | orchestrator |
2026-04-07 01:48:47.473689 | orchestrator | TASK
[osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-07 01:48:47.473708 | orchestrator | Tuesday 07 April 2026 01:48:27 +0000 (0:00:01.028) 0:00:48.371 ********* 2026-04-07 01:48:47.473725 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:48:47.473742 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:48:47.473759 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:48:47.473775 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:48:47.473792 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:48:47.473809 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:48:47.473826 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:48:47.473843 | orchestrator | 2026-04-07 01:48:47.473859 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-07 01:48:47.473876 | orchestrator | Tuesday 07 April 2026 01:48:27 +0000 (0:00:00.227) 0:00:48.599 ********* 2026-04-07 01:48:47.473894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:48:47.473910 | orchestrator | 2026-04-07 01:48:47.473927 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-07 01:48:47.473944 | orchestrator | Tuesday 07 April 2026 01:48:28 +0000 (0:00:00.370) 0:00:48.970 ********* 2026-04-07 01:48:47.473961 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:48:47.473978 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:48:47.473995 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:48:47.474012 | orchestrator | ok: [testbed-manager] 2026-04-07 01:48:47.474103 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:48:47.474121 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:48:47.474137 | orchestrator | ok: [testbed-node-1] 2026-04-07 
01:48:47.474154 | orchestrator | 2026-04-07 01:48:47.474170 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-07 01:48:47.474188 | orchestrator | Tuesday 07 April 2026 01:48:29 +0000 (0:00:01.693) 0:00:50.664 ********* 2026-04-07 01:48:47.474205 | orchestrator | changed: [testbed-manager] 2026-04-07 01:48:47.474222 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:48:47.474239 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:48:47.474256 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:48:47.474273 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:48:47.474290 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:48:47.474308 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:48:47.474339 | orchestrator | 2026-04-07 01:48:47.474356 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-07 01:48:47.474373 | orchestrator | Tuesday 07 April 2026 01:48:30 +0000 (0:00:01.145) 0:00:51.809 ********* 2026-04-07 01:48:47.474391 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:48:47.474408 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:48:47.474424 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:48:47.474442 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:48:47.474459 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:48:47.474475 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:48:47.474492 | orchestrator | changed: [testbed-manager] 2026-04-07 01:48:47.474510 | orchestrator | 2026-04-07 01:48:47.474527 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-07 01:48:47.474544 | orchestrator | Tuesday 07 April 2026 01:48:44 +0000 (0:00:13.748) 0:01:05.557 ********* 2026-04-07 01:48:47.474561 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:48:47.474578 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:48:47.474595 | 
orchestrator | ok: [testbed-node-2] 2026-04-07 01:48:47.474612 | orchestrator | ok: [testbed-manager] 2026-04-07 01:48:47.474652 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:48:47.474672 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:48:47.474692 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:48:47.474711 | orchestrator | 2026-04-07 01:48:47.474730 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-07 01:48:47.474747 | orchestrator | Tuesday 07 April 2026 01:48:45 +0000 (0:00:01.015) 0:01:06.573 ********* 2026-04-07 01:48:47.474767 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:48:47.474786 | orchestrator | ok: [testbed-manager] 2026-04-07 01:48:47.474804 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:48:47.474823 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:48:47.474842 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:48:47.474860 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:48:47.474878 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:48:47.474897 | orchestrator | 2026-04-07 01:48:47.474916 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-07 01:48:47.474934 | orchestrator | Tuesday 07 April 2026 01:48:46 +0000 (0:00:00.951) 0:01:07.525 ********* 2026-04-07 01:48:47.474956 | orchestrator | ok: [testbed-manager] 2026-04-07 01:48:47.474968 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:48:47.474979 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:48:47.474990 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:48:47.475000 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:48:47.475011 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:48:47.475022 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:48:47.475033 | orchestrator | 2026-04-07 01:48:47.475044 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-07 01:48:47.475056 | 
orchestrator | Tuesday 07 April 2026 01:48:46 +0000 (0:00:00.242) 0:01:07.768 ********* 2026-04-07 01:48:47.475067 | orchestrator | ok: [testbed-manager] 2026-04-07 01:48:47.475078 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:48:47.475089 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:48:47.475100 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:48:47.475111 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:48:47.475122 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:48:47.475133 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:48:47.475144 | orchestrator | 2026-04-07 01:48:47.475155 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-07 01:48:47.475166 | orchestrator | Tuesday 07 April 2026 01:48:47 +0000 (0:00:00.284) 0:01:08.052 ********* 2026-04-07 01:48:47.475178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:48:47.475190 | orchestrator | 2026-04-07 01:48:47.475214 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-07 01:51:11.996742 | orchestrator | Tuesday 07 April 2026 01:48:47 +0000 (0:00:00.307) 0:01:08.359 ********* 2026-04-07 01:51:11.996838 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:11.996851 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:11.996859 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:11.996867 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:11.996875 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:51:11.996882 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:11.996890 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:11.996897 | orchestrator | 2026-04-07 01:51:11.996905 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-04-07 01:51:11.996913 | orchestrator | Tuesday 07 April 2026 01:48:49 +0000 (0:00:01.716) 0:01:10.075 ********* 2026-04-07 01:51:11.996921 | orchestrator | changed: [testbed-manager] 2026-04-07 01:51:11.996929 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:51:11.996937 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:51:11.996945 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:51:11.996952 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:51:11.996960 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:51:11.996967 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:51:11.996975 | orchestrator | 2026-04-07 01:51:11.996983 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-07 01:51:11.996991 | orchestrator | Tuesday 07 April 2026 01:48:49 +0000 (0:00:00.576) 0:01:10.652 ********* 2026-04-07 01:51:11.996999 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:11.997006 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:11.997014 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:11.997021 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:11.997029 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:11.997036 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:11.997044 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:51:11.997051 | orchestrator | 2026-04-07 01:51:11.997060 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-07 01:51:11.997067 | orchestrator | Tuesday 07 April 2026 01:48:50 +0000 (0:00:00.263) 0:01:10.915 ********* 2026-04-07 01:51:11.997075 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:11.997083 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:11.997090 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:11.997097 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:11.997105 | orchestrator | ok: [testbed-node-0] 
2026-04-07 01:51:11.997113 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:51:11.997120 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:51:11.997128 | orchestrator |
2026-04-07 01:51:11.997135 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-07 01:51:11.997150 | orchestrator | Tuesday 07 April 2026 01:48:51 +0000 (0:00:01.250) 0:01:12.165 *********
2026-04-07 01:51:11.997158 | orchestrator | changed: [testbed-manager]
2026-04-07 01:51:11.997166 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:51:11.997173 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:51:11.997181 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:51:11.997189 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:51:11.997196 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:51:11.997204 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:51:11.997211 | orchestrator |
2026-04-07 01:51:11.997223 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-07 01:51:11.997231 | orchestrator | Tuesday 07 April 2026 01:48:53 +0000 (0:00:01.801) 0:01:13.967 *********
2026-04-07 01:51:11.997240 | orchestrator | ok: [testbed-manager]
2026-04-07 01:51:11.997249 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:51:11.997257 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:51:11.997266 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:51:11.997274 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:51:11.997283 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:51:11.997291 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:51:11.997299 | orchestrator |
2026-04-07 01:51:11.997308 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-07 01:51:11.997336 | orchestrator | Tuesday 07 April 2026 01:48:55 +0000 (0:00:02.466) 0:01:16.433 *********
2026-04-07 01:51:11.997346 | orchestrator | ok: [testbed-manager]
2026-04-07 01:51:11.997354 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:51:11.997362 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:51:11.997371 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:51:11.997379 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:51:11.997387 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:51:11.997396 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:51:11.997404 | orchestrator |
2026-04-07 01:51:11.997413 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-07 01:51:11.997421 | orchestrator | Tuesday 07 April 2026 01:49:36 +0000 (0:00:41.431) 0:01:57.865 *********
2026-04-07 01:51:11.997429 | orchestrator | changed: [testbed-manager]
2026-04-07 01:51:11.997438 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:51:11.997447 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:51:11.997455 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:51:11.997463 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:51:11.997472 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:51:11.997480 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:51:11.997489 | orchestrator |
2026-04-07 01:51:11.997497 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-07 01:51:11.997506 | orchestrator | Tuesday 07 April 2026 01:50:54 +0000 (0:01:17.778) 0:03:15.643 *********
2026-04-07 01:51:11.997515 | orchestrator | ok: [testbed-manager]
2026-04-07 01:51:11.997528 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:51:11.997540 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:51:11.997553 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:51:11.997565 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:51:11.997576 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:51:11.997588 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:51:11.997601 | orchestrator |
2026-04-07 01:51:11.997614 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-07 01:51:11.997627 | orchestrator | Tuesday 07 April 2026 01:50:56 +0000 (0:00:01.798) 0:03:17.442 *********
2026-04-07 01:51:11.997659 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:51:11.997670 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:51:11.997681 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:51:11.997692 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:51:11.997703 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:51:11.997714 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:51:11.997726 | orchestrator | changed: [testbed-manager]
2026-04-07 01:51:11.997736 | orchestrator |
2026-04-07 01:51:11.997748 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-07 01:51:11.997759 | orchestrator | Tuesday 07 April 2026 01:51:10 +0000 (0:00:14.088) 0:03:31.530 *********
2026-04-07 01:51:11.997803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-07 01:51:11.997843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-07 01:51:11.997866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-07 01:51:11.997875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-07 01:51:11.997883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-07 01:51:11.997891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-07 01:51:11.997899 | orchestrator |
2026-04-07 01:51:11.997906 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-07 01:51:11.997914 | orchestrator | Tuesday 07 April 2026 01:51:11 +0000 (0:00:00.497) 0:03:32.028 *********
2026-04-07 01:51:11.997922 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.997930 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.997938 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:51:11.997945 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:51:11.997953 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.997964 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.997972 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:51:11.997979 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:51:11.997987 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.997995 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.998002 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 01:51:11.998009 | orchestrator |
2026-04-07 01:51:11.998069 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-07 01:51:11.998077 | orchestrator | Tuesday 07 April 2026 01:51:11 +0000 (0:00:00.752) 0:03:32.780 *********
2026-04-07 01:51:11.998085 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:11.998094 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:11.998102 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:11.998109 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:11.998117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:11.998132 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.791784 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.791882 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.791915 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.791924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.791932 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.791939 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.791946 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:51:17.791955 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.791962 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.791970 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.791977 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.791984 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.791991 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.791998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792004 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792011 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.792018 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.792025 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.792032 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:51:17.792039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.792046 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.792053 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.792059 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.792066 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.792073 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.792080 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792086 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.792094 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.792101 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792108 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.792127 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.792135 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.792142 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.792149 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:51:17.792156 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.792163 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792176 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792185 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:51:17.792192 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.792199 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.792206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.792213 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.792219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.792241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.792250 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.792257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-07 01:51:17.792265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.792272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.792280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-07 01:51:17.792287 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.792295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-07 01:51:17.792302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.792310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.792317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-07 01:51:17.792325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.792332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.792340 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-07 01:51:17.792347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.792355 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.792362 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-07 01:51:17.792369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792384 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-07 01:51:17.792392 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-07 01:51:17.792407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-07 01:51:17.792415 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-07 01:51:17.792437 | orchestrator |
2026-04-07 01:51:17.792445 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-07 01:51:17.792453 | orchestrator | Tuesday 07 April 2026 01:51:16 +0000 (0:00:04.845) 0:03:37.626 *********
2026-04-07 01:51:17.792461 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792475 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-07 01:51:17.792517 | orchestrator |
2026-04-07 01:51:17.792525 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-07 01:51:17.792533 | orchestrator | Tuesday 07 April 2026 01:51:17 +0000 (0:00:00.565) 0:03:38.192 *********
2026-04-07 01:51:17.792541 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792547 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:51:17.792554 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792561 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792569 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:51:17.792576 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:51:17.792583 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792590 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:51:17.792597 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:17.792617 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283326 | orchestrator |
2026-04-07 01:51:31.283471 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-07 01:51:31.283501 | orchestrator | Tuesday 07 April 2026 01:51:17 +0000 (0:00:00.485) 0:03:38.677 *********
2026-04-07 01:51:31.283521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283541 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:51:31.283562 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283581 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283600 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:51:31.283618 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283669 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:51:31.283691 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:51:31.283708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283744 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-07 01:51:31.283762 | orchestrator |
2026-04-07 01:51:31.283781 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-07 01:51:31.283831 | orchestrator | Tuesday 07 April 2026 01:51:18 +0000 (0:00:00.568) 0:03:39.246 *********
2026-04-07 01:51:31.283853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 01:51:31.283872 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:51:31.283891 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 01:51:31.283910 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:51:31.283928 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-07 01:51:31.283945
| orchestrator | skipping: [testbed-node-1] 2026-04-07 01:51:31.283963 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-07 01:51:31.283981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:51:31.284002 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-07 01:51:31.284023 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-07 01:51:31.284044 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-07 01:51:31.284064 | orchestrator | 2026-04-07 01:51:31.284082 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-07 01:51:31.284101 | orchestrator | Tuesday 07 April 2026 01:51:18 +0000 (0:00:00.552) 0:03:39.798 ********* 2026-04-07 01:51:31.284122 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:51:31.284144 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:51:31.284163 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:51:31.284182 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:51:31.284200 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:51:31.284218 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:51:31.284237 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:51:31.284256 | orchestrator | 2026-04-07 01:51:31.284274 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-04-07 01:51:31.284291 | orchestrator | Tuesday 07 April 2026 01:51:19 +0000 (0:00:00.335) 0:03:40.134 ********* 2026-04-07 01:51:31.284310 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:31.284346 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:31.284367 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:31.284387 | orchestrator | ok: [testbed-node-2] 
2026-04-07 01:51:31.284406 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:31.284426 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:31.284446 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:31.284465 | orchestrator | 2026-04-07 01:51:31.284484 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-07 01:51:31.284504 | orchestrator | Tuesday 07 April 2026 01:51:25 +0000 (0:00:05.812) 0:03:45.947 ********* 2026-04-07 01:51:31.284524 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-07 01:51:31.284542 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-07 01:51:31.284561 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:51:31.284581 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-07 01:51:31.284601 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:51:31.284620 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:51:31.284680 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-07 01:51:31.284698 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-07 01:51:31.284716 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:51:31.284770 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:51:31.284817 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-07 01:51:31.284838 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:51:31.284857 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-07 01:51:31.284876 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:51:31.284895 | orchestrator | 2026-04-07 01:51:31.284930 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-07 01:51:31.284949 | orchestrator | Tuesday 07 April 2026 01:51:25 +0000 (0:00:00.327) 0:03:46.274 ********* 2026-04-07 01:51:31.284968 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-07 01:51:31.284986 | orchestrator 
| ok: [testbed-node-4] => (item=cron) 2026-04-07 01:51:31.285005 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-07 01:51:31.285055 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-07 01:51:31.285077 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-07 01:51:31.285097 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-07 01:51:31.285115 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-07 01:51:31.285135 | orchestrator | 2026-04-07 01:51:31.285183 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-07 01:51:31.285204 | orchestrator | Tuesday 07 April 2026 01:51:26 +0000 (0:00:01.071) 0:03:47.346 ********* 2026-04-07 01:51:31.285227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:51:31.285249 | orchestrator | 2026-04-07 01:51:31.285269 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-07 01:51:31.285286 | orchestrator | Tuesday 07 April 2026 01:51:27 +0000 (0:00:00.591) 0:03:47.938 ********* 2026-04-07 01:51:31.285307 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:31.285326 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:31.285345 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:31.285363 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:31.285381 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:31.285399 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:51:31.285418 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:31.285436 | orchestrator | 2026-04-07 01:51:31.285456 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-07 01:51:31.285474 | orchestrator | Tuesday 07 April 2026 01:51:28 +0000 
(0:00:01.292) 0:03:49.230 ********* 2026-04-07 01:51:31.285493 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:31.285513 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:31.285532 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:31.285551 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:31.285570 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:31.285588 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:31.285607 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:51:31.285626 | orchestrator | 2026-04-07 01:51:31.285718 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-07 01:51:31.285740 | orchestrator | Tuesday 07 April 2026 01:51:28 +0000 (0:00:00.624) 0:03:49.854 ********* 2026-04-07 01:51:31.285760 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:51:31.285781 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:51:31.285800 | orchestrator | changed: [testbed-manager] 2026-04-07 01:51:31.285819 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:51:31.285837 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:51:31.285856 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:51:31.285874 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:51:31.285894 | orchestrator | 2026-04-07 01:51:31.285914 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-07 01:51:31.285935 | orchestrator | Tuesday 07 April 2026 01:51:29 +0000 (0:00:00.619) 0:03:50.474 ********* 2026-04-07 01:51:31.285956 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:31.285976 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:31.285995 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:31.286104 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:31.286132 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:31.286154 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:31.286175 | orchestrator | ok: 
[testbed-node-2] 2026-04-07 01:51:31.286197 | orchestrator | 2026-04-07 01:51:31.286219 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-07 01:51:31.286258 | orchestrator | Tuesday 07 April 2026 01:51:30 +0000 (0:00:00.710) 0:03:51.184 ********* 2026-04-07 01:51:31.286299 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525161.851495, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:31.286326 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525133.7819147, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:31.286349 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525146.6194837, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:31.286414 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525152.1035533, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418445 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525158.0594585, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418543 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525166.9634345, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418556 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775525157.4356422, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418590 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418613 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418624 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418633 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418714 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418725 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418735 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 01:51:36.418752 | orchestrator | 2026-04-07 01:51:36.418763 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-07 01:51:36.418773 | orchestrator | Tuesday 07 April 2026 01:51:31 +0000 (0:00:00.984) 0:03:52.169 ********* 2026-04-07 01:51:36.418782 | orchestrator | changed: [testbed-manager] 2026-04-07 01:51:36.418793 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:51:36.418802 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:51:36.418811 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:51:36.418821 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:51:36.418830 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:51:36.418839 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:51:36.418848 | orchestrator | 2026-04-07 01:51:36.418858 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-04-07 01:51:36.418867 | orchestrator | Tuesday 07 April 2026 01:51:32 +0000 (0:00:01.126) 0:03:53.296 ********* 2026-04-07 01:51:36.418876 | orchestrator | changed: [testbed-manager] 2026-04-07 01:51:36.418886 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:51:36.418894 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:51:36.418903 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:51:36.418924 | 
orchestrator | changed: [testbed-node-5] 2026-04-07 01:51:36.418942 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:51:36.418951 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:51:36.418960 | orchestrator | 2026-04-07 01:51:36.418973 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-07 01:51:36.418983 | orchestrator | Tuesday 07 April 2026 01:51:33 +0000 (0:00:01.169) 0:03:54.465 ********* 2026-04-07 01:51:36.418994 | orchestrator | changed: [testbed-manager] 2026-04-07 01:51:36.419005 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:51:36.419015 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:51:36.419025 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:51:36.419035 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:51:36.419045 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:51:36.419056 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:51:36.419066 | orchestrator | 2026-04-07 01:51:36.419076 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-07 01:51:36.419087 | orchestrator | Tuesday 07 April 2026 01:51:34 +0000 (0:00:01.210) 0:03:55.676 ********* 2026-04-07 01:51:36.419097 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:51:36.419107 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:51:36.419117 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:51:36.419127 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:51:36.419137 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:51:36.419147 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:51:36.419157 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:51:36.419167 | orchestrator | 2026-04-07 01:51:36.419177 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-07 01:51:36.419188 | orchestrator | Tuesday 07 April 2026 01:51:35 +0000 
(0:00:00.323) 0:03:56.000 ********* 2026-04-07 01:51:36.419198 | orchestrator | ok: [testbed-manager] 2026-04-07 01:51:36.419209 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:51:36.419220 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:51:36.419230 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:51:36.419240 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:51:36.419250 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:51:36.419260 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:51:36.419270 | orchestrator | 2026-04-07 01:51:36.419281 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-07 01:51:36.419292 | orchestrator | Tuesday 07 April 2026 01:51:35 +0000 (0:00:00.806) 0:03:56.806 ********* 2026-04-07 01:51:36.419303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:51:36.419322 | orchestrator | 2026-04-07 01:51:36.419338 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-07 01:51:36.419361 | orchestrator | Tuesday 07 April 2026 01:51:36 +0000 (0:00:00.500) 0:03:57.307 ********* 2026-04-07 01:52:56.442164 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442262 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:52:56.442271 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:52:56.442277 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:52:56.442281 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:52:56.442286 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:52:56.442291 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:52:56.442296 | orchestrator | 2026-04-07 01:52:56.442302 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-04-07 01:52:56.442309 | orchestrator | Tuesday 07 April 2026 01:51:44 +0000 (0:00:07.895) 0:04:05.202 ********* 2026-04-07 01:52:56.442313 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:52:56.442319 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442326 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:52:56.442334 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:52:56.442340 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:52:56.442347 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:52:56.442354 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442361 | orchestrator | 2026-04-07 01:52:56.442368 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-07 01:52:56.442375 | orchestrator | Tuesday 07 April 2026 01:51:45 +0000 (0:00:01.370) 0:04:06.573 ********* 2026-04-07 01:52:56.442383 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:52:56.442390 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442397 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:52:56.442405 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442413 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:52:56.442419 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:52:56.442424 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:52:56.442428 | orchestrator | 2026-04-07 01:52:56.442433 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-07 01:52:56.442437 | orchestrator | Tuesday 07 April 2026 01:51:46 +0000 (0:00:01.318) 0:04:07.891 ********* 2026-04-07 01:52:56.442442 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442446 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:52:56.442451 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442455 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:52:56.442460 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:52:56.442465 | orchestrator | ok: [testbed-node-1] 
2026-04-07 01:52:56.442469 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:52:56.442474 | orchestrator | 2026-04-07 01:52:56.442479 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-07 01:52:56.442484 | orchestrator | Tuesday 07 April 2026 01:51:47 +0000 (0:00:00.394) 0:04:08.286 ********* 2026-04-07 01:52:56.442489 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442493 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:52:56.442498 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442502 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:52:56.442507 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:52:56.442511 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:52:56.442515 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:52:56.442520 | orchestrator | 2026-04-07 01:52:56.442524 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-07 01:52:56.442529 | orchestrator | Tuesday 07 April 2026 01:51:47 +0000 (0:00:00.416) 0:04:08.702 ********* 2026-04-07 01:52:56.442534 | orchestrator | ok: [testbed-manager] 2026-04-07 01:52:56.442538 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:52:56.442542 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442562 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:52:56.442567 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:52:56.442572 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:52:56.442576 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:52:56.442581 | orchestrator | 2026-04-07 01:52:56.442585 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-07 01:52:56.442590 | orchestrator | Tuesday 07 April 2026 01:51:48 +0000 (0:00:00.351) 0:04:09.054 ********* 2026-04-07 01:52:56.442595 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:52:56.442599 | orchestrator | ok: [testbed-node-3] 
2026-04-07 01:52:56.442604 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:52:56.442662 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:52:56.442667 | orchestrator | ok: [testbed-manager]
2026-04-07 01:52:56.442672 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:52:56.442676 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:52:56.442681 | orchestrator |
2026-04-07 01:52:56.442686 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-07 01:52:56.442690 | orchestrator | Tuesday 07 April 2026 01:51:53 +0000 (0:00:05.622) 0:04:14.676 *********
2026-04-07 01:52:56.442697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:52:56.442704 | orchestrator |
2026-04-07 01:52:56.442708 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-07 01:52:56.442713 | orchestrator | Tuesday 07 April 2026 01:51:54 +0000 (0:00:00.516) 0:04:15.193 *********
2026-04-07 01:52:56.442717 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442723 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-07 01:52:56.442741 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442746 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-07 01:52:56.442754 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:52:56.442779 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442789 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-07 01:52:56.442797 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:52:56.442804 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442811 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-07 01:52:56.442818 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:52:56.442825 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442834 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:52:56.442841 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-07 01:52:56.442850 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442858 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-07 01:52:56.442880 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:52:56.442886 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:52:56.442892 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-07 01:52:56.442897 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-07 01:52:56.442902 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:52:56.442907 | orchestrator |
2026-04-07 01:52:56.442912 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-07 01:52:56.442918 | orchestrator | Tuesday 07 April 2026 01:51:54 +0000 (0:00:00.476) 0:04:15.669 *********
2026-04-07 01:52:56.442923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:52:56.442929 | orchestrator |
2026-04-07 01:52:56.442934 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-07 01:52:56.442946 | orchestrator | Tuesday 07 April 2026 01:51:55 +0000 (0:00:00.517) 0:04:16.187 *********
2026-04-07 01:52:56.442952 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-07 01:52:56.442957 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-07 01:52:56.442962 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:52:56.442967 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-07 01:52:56.442973 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:52:56.442978 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:52:56.442984 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-07 01:52:56.442989 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:52:56.442994 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-07 01:52:56.442999 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-07 01:52:56.443004 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:52:56.443009 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:52:56.443013 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-07 01:52:56.443018 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:52:56.443022 | orchestrator |
2026-04-07 01:52:56.443027 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-07 01:52:56.443031 | orchestrator | Tuesday 07 April 2026 01:51:55 +0000 (0:00:00.440) 0:04:16.627 *********
2026-04-07 01:52:56.443036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:52:56.443041 | orchestrator |
2026-04-07 01:52:56.443045 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-07 01:52:56.443050 | orchestrator | Tuesday 07 April 2026 01:51:56 +0000 (0:00:00.595) 0:04:17.223 *********
2026-04-07 01:52:56.443054 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:52:56.443059 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:52:56.443063 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:52:56.443068 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:52:56.443076 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:52:56.443081 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:52:56.443086 | orchestrator | changed: [testbed-manager]
2026-04-07 01:52:56.443090 | orchestrator |
2026-04-07 01:52:56.443095 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-07 01:52:56.443099 | orchestrator | Tuesday 07 April 2026 01:52:32 +0000 (0:00:35.962) 0:04:53.185 *********
2026-04-07 01:52:56.443104 | orchestrator | changed: [testbed-manager]
2026-04-07 01:52:56.443108 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:52:56.443113 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:52:56.443117 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:52:56.443122 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:52:56.443126 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:52:56.443131 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:52:56.443135 | orchestrator |
2026-04-07 01:52:56.443140 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-07 01:52:56.443144 | orchestrator | Tuesday 07 April 2026 01:52:40 +0000 (0:00:08.385) 0:05:01.571 *********
2026-04-07 01:52:56.443149 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:52:56.443153 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:52:56.443158 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:52:56.443162 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:52:56.443166 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:52:56.443171 | orchestrator | changed: [testbed-manager]
2026-04-07 01:52:56.443175 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:52:56.443180 | orchestrator |
2026-04-07 01:52:56.443184 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-07 01:52:56.443194 | orchestrator | Tuesday 07 April 2026 01:52:48 +0000 (0:00:07.953) 0:05:09.524 *********
2026-04-07 01:52:56.443199 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:52:56.443203 | orchestrator | ok: [testbed-manager]
2026-04-07 01:52:56.443208 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:52:56.443212 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:52:56.443217 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:52:56.443222 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:52:56.443230 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:52:56.443241 | orchestrator |
2026-04-07 01:52:56.443249 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-07 01:52:56.443256 | orchestrator | Tuesday 07 April 2026 01:52:50 +0000 (0:00:01.792) 0:05:11.317 *********
2026-04-07 01:52:56.443263 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:52:56.443270 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:52:56.443276 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:52:56.443283 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:52:56.443289 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:52:56.443296 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:52:56.443303 | orchestrator | changed: [testbed-manager]
2026-04-07 01:52:56.443310 | orchestrator |
2026-04-07 01:52:56.443322 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-07 01:53:09.283244 | orchestrator | Tuesday 07 April 2026 01:52:56 +0000 (0:00:06.009) 0:05:17.326 *********
2026-04-07 01:53:09.283331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:53:09.283343 | orchestrator |
2026-04-07 01:53:09.283350 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-07 01:53:09.283356 | orchestrator | Tuesday 07 April 2026 01:52:57 +0000 (0:00:00.659) 0:05:17.985 *********
2026-04-07 01:53:09.283362 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:53:09.283369 | orchestrator | changed: [testbed-manager]
2026-04-07 01:53:09.283375 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:53:09.283380 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:53:09.283386 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:53:09.283391 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:53:09.283397 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:53:09.283403 | orchestrator |
2026-04-07 01:53:09.283409 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-07 01:53:09.283414 | orchestrator | Tuesday 07 April 2026 01:52:57 +0000 (0:00:00.764) 0:05:18.750 *********
2026-04-07 01:53:09.283420 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:53:09.283427 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:53:09.283441 | orchestrator | ok: [testbed-manager]
2026-04-07 01:53:09.283447 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:53:09.283452 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:53:09.283457 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:53:09.283462 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:53:09.283468 | orchestrator |
2026-04-07 01:53:09.283473 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-07 01:53:09.283479 | orchestrator | Tuesday 07 April 2026 01:52:59 +0000 (0:00:01.845) 0:05:20.596 *********
2026-04-07 01:53:09.283484 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:53:09.283490 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:53:09.283495 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:53:09.283500 | orchestrator | changed: [testbed-manager]
2026-04-07 01:53:09.283506 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:53:09.283512 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:53:09.283517 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:53:09.283522 | orchestrator |
2026-04-07 01:53:09.283528 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-07 01:53:09.283533 | orchestrator | Tuesday 07 April 2026 01:53:00 +0000 (0:00:00.900) 0:05:21.497 *********
2026-04-07 01:53:09.283555 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.283561 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.283566 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.283571 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:53:09.283577 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:53:09.283582 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:53:09.283587 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:53:09.283592 | orchestrator |
2026-04-07 01:53:09.283598 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-07 01:53:09.283643 | orchestrator | Tuesday 07 April 2026 01:53:00 +0000 (0:00:00.304) 0:05:21.801 *********
2026-04-07 01:53:09.283652 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.283660 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.283669 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.283691 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:53:09.283700 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:53:09.283709 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:53:09.283717 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:53:09.283722 | orchestrator |
2026-04-07 01:53:09.283728 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-07 01:53:09.283733 | orchestrator | Tuesday 07 April 2026 01:53:01 +0000 (0:00:00.447) 0:05:22.249 *********
2026-04-07 01:53:09.283741 | orchestrator | ok: [testbed-manager]
2026-04-07 01:53:09.283749 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:53:09.283758 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:53:09.283766 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:53:09.283774 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:53:09.283782 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:53:09.283790 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:53:09.283798 | orchestrator |
2026-04-07 01:53:09.283806 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-07 01:53:09.283814 | orchestrator | Tuesday 07 April 2026 01:53:01 +0000 (0:00:00.342) 0:05:22.591 *********
2026-04-07 01:53:09.283823 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.283831 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.283840 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.283849 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:53:09.283858 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:53:09.283867 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:53:09.283875 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:53:09.283884 | orchestrator |
2026-04-07 01:53:09.283893 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-07 01:53:09.283904 | orchestrator | Tuesday 07 April 2026 01:53:02 +0000 (0:00:00.330) 0:05:22.922 *********
2026-04-07 01:53:09.283913 | orchestrator | ok: [testbed-manager]
2026-04-07 01:53:09.283923 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:53:09.283930 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:53:09.283937 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:53:09.283943 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:53:09.283949 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:53:09.283955 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:53:09.283962 | orchestrator |
2026-04-07 01:53:09.283969 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-07 01:53:09.283978 | orchestrator | Tuesday 07 April 2026 01:53:02 +0000 (0:00:00.318) 0:05:23.250 *********
2026-04-07 01:53:09.283986 | orchestrator | ok: [testbed-manager] =>
2026-04-07 01:53:09.283995 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284003 | orchestrator | ok: [testbed-node-3] =>
2026-04-07 01:53:09.284012 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284021 | orchestrator | ok: [testbed-node-4] =>
2026-04-07 01:53:09.284030 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284039 | orchestrator | ok: [testbed-node-5] =>
2026-04-07 01:53:09.284047 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284084 | orchestrator | ok: [testbed-node-0] =>
2026-04-07 01:53:09.284094 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284103 | orchestrator | ok: [testbed-node-1] =>
2026-04-07 01:53:09.284112 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284121 | orchestrator | ok: [testbed-node-2] =>
2026-04-07 01:53:09.284130 | orchestrator |  docker_version: 5:27.5.1
2026-04-07 01:53:09.284139 | orchestrator |
2026-04-07 01:53:09.284148 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-07 01:53:09.284158 | orchestrator | Tuesday 07 April 2026 01:53:02 +0000 (0:00:00.361) 0:05:23.568 *********
2026-04-07 01:53:09.284167 | orchestrator | ok: [testbed-manager] =>
2026-04-07 01:53:09.284176 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284185 | orchestrator | ok: [testbed-node-3] =>
2026-04-07 01:53:09.284193 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284202 | orchestrator | ok: [testbed-node-4] =>
2026-04-07 01:53:09.284211 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284220 | orchestrator | ok: [testbed-node-5] =>
2026-04-07 01:53:09.284228 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284237 | orchestrator | ok: [testbed-node-0] =>
2026-04-07 01:53:09.284245 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284253 | orchestrator | ok: [testbed-node-1] =>
2026-04-07 01:53:09.284261 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284270 | orchestrator | ok: [testbed-node-2] =>
2026-04-07 01:53:09.284277 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-07 01:53:09.284285 | orchestrator |
2026-04-07 01:53:09.284294 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-07 01:53:09.284303 | orchestrator | Tuesday 07 April 2026 01:53:03 +0000 (0:00:00.361) 0:05:23.930 *********
2026-04-07 01:53:09.284312 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.284321 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.284330 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.284338 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:53:09.284347 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:53:09.284355 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:53:09.284364 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:53:09.284372 | orchestrator |
2026-04-07 01:53:09.284381 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-07 01:53:09.284389 | orchestrator | Tuesday 07 April 2026 01:53:03 +0000 (0:00:00.305) 0:05:24.236 *********
2026-04-07 01:53:09.284394 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.284400 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.284405 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.284410 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:53:09.284415 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:53:09.284420 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:53:09.284426 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:53:09.284431 | orchestrator |
2026-04-07 01:53:09.284436 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-07 01:53:09.284442 | orchestrator | Tuesday 07 April 2026 01:53:03 +0000 (0:00:00.374) 0:05:24.611 *********
2026-04-07 01:53:09.284449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 01:53:09.284456 | orchestrator |
2026-04-07 01:53:09.284468 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-07 01:53:09.284474 | orchestrator | Tuesday 07 April 2026 01:53:04 +0000 (0:00:00.501) 0:05:25.112 *********
2026-04-07 01:53:09.284479 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:53:09.284484 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:53:09.284490 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:53:09.284495 | orchestrator | ok: [testbed-manager]
2026-04-07 01:53:09.284502 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:53:09.284518 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:53:09.284526 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:53:09.284534 | orchestrator |
2026-04-07 01:53:09.284542 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-07 01:53:09.284550 | orchestrator | Tuesday 07 April 2026 01:53:05 +0000 (0:00:00.982) 0:05:26.094 *********
2026-04-07 01:53:09.284559 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:53:09.284568 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:53:09.284576 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:53:09.284585 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:53:09.284593 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:53:09.284617 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:53:09.284627 | orchestrator | ok: [testbed-manager]
2026-04-07 01:53:09.284635 | orchestrator |
2026-04-07 01:53:09.284643 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-07 01:53:09.284654 | orchestrator | Tuesday 07 April 2026 01:53:08 +0000 (0:00:03.584) 0:05:29.679 *********
2026-04-07 01:53:09.284662 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-07 01:53:09.284672 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-07 01:53:09.284681 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-07 01:53:09.284689 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:53:09.284698 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-07 01:53:09.284706 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-07 01:53:09.284714 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-07 01:53:09.284721 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-07 01:53:09.284728 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-07 01:53:09.284736 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-07 01:53:09.284744 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:53:09.284753 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-07 01:53:09.284762 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-07 01:53:09.284771 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-07 01:53:09.284779 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:53:09.284788 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-07 01:53:09.284805 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-07 01:54:11.744335 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-07 01:54:11.744412 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:11.744421 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-07 01:54:11.744427 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-07 01:54:11.744432 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-07 01:54:11.744437 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:11.744443 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:11.744448 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-07 01:54:11.744453 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-07 01:54:11.744458 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-07 01:54:11.744462 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:11.744467 | orchestrator |
2026-04-07 01:54:11.744473 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-07 01:54:11.744488 | orchestrator | Tuesday 07 April 2026 01:53:09 +0000 (0:00:00.712) 0:05:30.391 *********
2026-04-07 01:54:11.744493 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744498 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744503 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744508 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744513 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744518 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744545 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744550 | orchestrator |
2026-04-07 01:54:11.744555 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-07 01:54:11.744559 | orchestrator | Tuesday 07 April 2026 01:53:16 +0000 (0:00:07.195) 0:05:37.587 *********
2026-04-07 01:54:11.744564 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744569 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744574 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744579 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744583 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744588 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744639 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744644 | orchestrator |
2026-04-07 01:54:11.744649 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-07 01:54:11.744654 | orchestrator | Tuesday 07 April 2026 01:53:17 +0000 (0:00:01.090) 0:05:38.677 *********
2026-04-07 01:54:11.744659 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744664 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744668 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744673 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744678 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744683 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744687 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744692 | orchestrator |
2026-04-07 01:54:11.744697 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-07 01:54:11.744702 | orchestrator | Tuesday 07 April 2026 01:53:26 +0000 (0:00:08.289) 0:05:46.967 *********
2026-04-07 01:54:11.744707 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744711 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744716 | orchestrator | changed: [testbed-manager]
2026-04-07 01:54:11.744721 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744726 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744730 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744735 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744740 | orchestrator |
2026-04-07 01:54:11.744745 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-07 01:54:11.744750 | orchestrator | Tuesday 07 April 2026 01:53:29 +0000 (0:00:03.531) 0:05:50.498 *********
2026-04-07 01:54:11.744755 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744759 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744765 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744769 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744782 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744787 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744792 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744797 | orchestrator |
2026-04-07 01:54:11.744802 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-07 01:54:11.744813 | orchestrator | Tuesday 07 April 2026 01:53:30 +0000 (0:00:01.282) 0:05:51.781 *********
2026-04-07 01:54:11.744817 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744822 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744827 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744832 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744836 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744841 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744846 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744851 | orchestrator |
2026-04-07 01:54:11.744856 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-07 01:54:11.744861 | orchestrator | Tuesday 07 April 2026 01:53:32 +0000 (0:00:01.603) 0:05:53.385 *********
2026-04-07 01:54:11.744865 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:11.744870 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:11.744875 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:11.744880 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:11.744889 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:11.744893 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:11.744898 | orchestrator | changed: [testbed-manager]
2026-04-07 01:54:11.744903 | orchestrator |
2026-04-07 01:54:11.744908 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-07 01:54:11.744913 | orchestrator | Tuesday 07 April 2026 01:53:33 +0000 (0:00:00.674) 0:05:54.059 *********
2026-04-07 01:54:11.744917 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.744922 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744927 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744932 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744937 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.744941 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.744946 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.744951 | orchestrator |
2026-04-07 01:54:11.744956 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-07 01:54:11.744972 | orchestrator | Tuesday 07 April 2026 01:53:43 +0000 (0:00:10.230) 0:06:04.289 *********
2026-04-07 01:54:11.744977 | orchestrator | changed: [testbed-manager]
2026-04-07 01:54:11.744982 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.744987 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.744991 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.744996 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.745001 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.745005 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.745010 | orchestrator |
2026-04-07 01:54:11.745015 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-07 01:54:11.745020 | orchestrator | Tuesday 07 April 2026 01:53:44 +0000 (0:00:00.931) 0:06:05.221 *********
2026-04-07 01:54:11.745025 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.745029 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.745034 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.745039 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.745044 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.745048 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.745053 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.745058 | orchestrator |
2026-04-07 01:54:11.745063 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-07 01:54:11.745067 | orchestrator | Tuesday 07 April 2026 01:53:53 +0000 (0:00:09.420) 0:06:14.641 *********
2026-04-07 01:54:11.745072 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.745077 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.745082 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.745087 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.745091 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.745096 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.745101 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.745106 | orchestrator |
2026-04-07 01:54:11.745110 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-07 01:54:11.745115 | orchestrator | Tuesday 07 April 2026 01:54:04 +0000 (0:00:10.941) 0:06:25.583 *********
2026-04-07 01:54:11.745120 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-07 01:54:11.745125 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-07 01:54:11.745130 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-07 01:54:11.745135 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-07 01:54:11.745139 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-07 01:54:11.745144 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-07 01:54:11.745149 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-07 01:54:11.745154 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-07 01:54:11.745158 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-07 01:54:11.745167 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-07 01:54:11.745172 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-07 01:54:11.745211 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-07 01:54:11.745217 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-07 01:54:11.745222 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-07 01:54:11.745226 | orchestrator |
2026-04-07 01:54:11.745231 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-07 01:54:11.745236 | orchestrator | Tuesday 07 April 2026 01:54:05 +0000 (0:00:01.262) 0:06:26.845 *********
2026-04-07 01:54:11.745244 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:11.745249 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:11.745253 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:11.745258 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:11.745263 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:11.745268 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:11.745272 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:11.745277 | orchestrator |
2026-04-07 01:54:11.745282 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-07 01:54:11.745287 | orchestrator | Tuesday 07 April 2026 01:54:06 +0000 (0:00:00.575) 0:06:27.421 *********
2026-04-07 01:54:11.745291 | orchestrator | ok: [testbed-manager]
2026-04-07 01:54:11.745296 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:54:11.745303 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:54:11.745310 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:54:11.745318 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:54:11.745326 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:54:11.745338 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:54:11.745347 | orchestrator |
2026-04-07 01:54:11.745355 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-07 01:54:11.745363 | orchestrator | Tuesday 07 April 2026 01:54:10 +0000 (0:00:04.127) 0:06:31.548 *********
2026-04-07 01:54:11.745371 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:11.745379 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:11.745387 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:11.745395 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:11.745402 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:11.745409 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:11.745417 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:11.745424 | orchestrator |
2026-04-07 01:54:11.745433 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-07 01:54:11.745440 | orchestrator | Tuesday 07 April 2026 01:54:11 +0000 (0:00:00.553) 0:06:32.101 *********
2026-04-07 01:54:11.745448 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-07 01:54:11.745456 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-07 01:54:11.745465 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:11.745473 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-07 01:54:11.745481 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-07 01:54:11.745490 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:11.745497 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-07 01:54:11.745506 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-07 01:54:11.745514 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:11.745529 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-07 01:54:32.631180 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-07 01:54:32.631297 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:32.631314 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-07 01:54:32.631327 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-07 01:54:32.631338 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:32.631374 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-07 01:54:32.631387 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-07 01:54:32.631399 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:32.631410 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-07 01:54:32.631421 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-07 01:54:32.631432 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:32.631444 | orchestrator |
2026-04-07 01:54:32.631458 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-07 01:54:32.631470 | orchestrator | Tuesday 07 April 2026 01:54:12 +0000 (0:00:00.875) 0:06:32.976 *********
2026-04-07 01:54:32.631481 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:32.631493 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:32.631504 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:32.631515 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:32.631527 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:32.631538 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:32.631549 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:32.631560 | orchestrator |
2026-04-07 01:54:32.631572 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-07 01:54:32.631584 | orchestrator | Tuesday 07 April 2026 01:54:12 +0000 (0:00:00.543) 0:06:33.520 *********
2026-04-07 01:54:32.631710 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:32.631732 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:32.631751 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:32.631768 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:54:32.631787 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:54:32.631804 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:54:32.631822 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:54:32.631841 | orchestrator |
2026-04-07 01:54:32.631861 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-07 01:54:32.631881 | orchestrator | Tuesday 07 April 2026 01:54:13 +0000 (0:00:00.576) 0:06:34.096 *********
2026-04-07 01:54:32.631901 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:54:32.631921 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:54:32.631940 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:54:32.631959 | orchestrator | skipping:
[testbed-node-5] 2026-04-07 01:54:32.631988 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:54:32.632007 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:54:32.632025 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:54:32.632043 | orchestrator | 2026-04-07 01:54:32.632062 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-07 01:54:32.632081 | orchestrator | Tuesday 07 April 2026 01:54:13 +0000 (0:00:00.542) 0:06:34.639 ********* 2026-04-07 01:54:32.632100 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.632120 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.632138 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.632157 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:32.632176 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.632195 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.632213 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.632231 | orchestrator | 2026-04-07 01:54:32.632251 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-07 01:54:32.632271 | orchestrator | Tuesday 07 April 2026 01:54:15 +0000 (0:00:01.945) 0:06:36.584 ********* 2026-04-07 01:54:32.632291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:54:32.632313 | orchestrator | 2026-04-07 01:54:32.632332 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-07 01:54:32.632351 | orchestrator | Tuesday 07 April 2026 01:54:16 +0000 (0:00:00.901) 0:06:37.485 ********* 2026-04-07 01:54:32.632395 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.632417 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:32.632436 | orchestrator | changed: 
[testbed-node-4] 2026-04-07 01:54:32.632455 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:32.632474 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:32.632493 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:32.632511 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:32.632529 | orchestrator | 2026-04-07 01:54:32.632546 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-07 01:54:32.632565 | orchestrator | Tuesday 07 April 2026 01:54:17 +0000 (0:00:00.881) 0:06:38.367 ********* 2026-04-07 01:54:32.632585 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.632634 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:32.632651 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:32.632669 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:32.632687 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:32.632706 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:32.632726 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:32.632744 | orchestrator | 2026-04-07 01:54:32.632762 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-07 01:54:32.632780 | orchestrator | Tuesday 07 April 2026 01:54:18 +0000 (0:00:00.869) 0:06:39.237 ********* 2026-04-07 01:54:32.632799 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.632819 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:32.632838 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:32.632857 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:32.632874 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:32.632893 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:32.632911 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:32.632929 | orchestrator | 2026-04-07 01:54:32.632950 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-04-07 01:54:32.632996 | orchestrator | Tuesday 07 April 2026 01:54:20 +0000 (0:00:01.684) 0:06:40.921 ********* 2026-04-07 01:54:32.633016 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:32.633034 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.633053 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.633072 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:32.633091 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.633109 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.633127 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.633146 | orchestrator | 2026-04-07 01:54:32.633166 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-07 01:54:32.633185 | orchestrator | Tuesday 07 April 2026 01:54:21 +0000 (0:00:01.378) 0:06:42.300 ********* 2026-04-07 01:54:32.633203 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.633221 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:32.633239 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:32.633258 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:32.633277 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:32.633296 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:32.633315 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:32.633334 | orchestrator | 2026-04-07 01:54:32.633352 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-07 01:54:32.633371 | orchestrator | Tuesday 07 April 2026 01:54:22 +0000 (0:00:01.350) 0:06:43.650 ********* 2026-04-07 01:54:32.633391 | orchestrator | changed: [testbed-manager] 2026-04-07 01:54:32.633410 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:32.633428 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:32.633447 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:32.633465 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 01:54:32.633483 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:32.633503 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:32.633522 | orchestrator | 2026-04-07 01:54:32.633556 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-07 01:54:32.633575 | orchestrator | Tuesday 07 April 2026 01:54:24 +0000 (0:00:01.403) 0:06:45.053 ********* 2026-04-07 01:54:32.633617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:54:32.633638 | orchestrator | 2026-04-07 01:54:32.633656 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-07 01:54:32.633674 | orchestrator | Tuesday 07 April 2026 01:54:25 +0000 (0:00:01.089) 0:06:46.143 ********* 2026-04-07 01:54:32.633692 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.633711 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.633730 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.633749 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:32.633769 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.633787 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.633805 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.633823 | orchestrator | 2026-04-07 01:54:32.633842 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-07 01:54:32.633863 | orchestrator | Tuesday 07 April 2026 01:54:26 +0000 (0:00:01.425) 0:06:47.568 ********* 2026-04-07 01:54:32.633876 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.633888 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.633899 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.633910 | orchestrator | ok: [testbed-node-5] 
2026-04-07 01:54:32.633921 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.633948 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.633960 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.633971 | orchestrator | 2026-04-07 01:54:32.633983 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-07 01:54:32.633994 | orchestrator | Tuesday 07 April 2026 01:54:27 +0000 (0:00:01.161) 0:06:48.730 ********* 2026-04-07 01:54:32.634005 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.634070 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.634085 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.634096 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:32.634108 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.634119 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.634130 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.634141 | orchestrator | 2026-04-07 01:54:32.634152 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-07 01:54:32.634163 | orchestrator | Tuesday 07 April 2026 01:54:28 +0000 (0:00:01.153) 0:06:49.884 ********* 2026-04-07 01:54:32.634174 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:32.634186 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:32.634197 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:32.634208 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:32.634219 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:32.634267 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:32.634279 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:32.634290 | orchestrator | 2026-04-07 01:54:32.634302 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-07 01:54:32.634313 | orchestrator | Tuesday 07 April 2026 01:54:31 +0000 (0:00:02.378) 0:06:52.262 ********* 2026-04-07 01:54:32.634324 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:54:32.634336 | orchestrator | 2026-04-07 01:54:32.634347 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:32.634359 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.926) 0:06:53.188 ********* 2026-04-07 01:54:32.634370 | orchestrator | 2026-04-07 01:54:32.634381 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:32.634406 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.041) 0:06:53.230 ********* 2026-04-07 01:54:32.634417 | orchestrator | 2026-04-07 01:54:32.634429 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:32.634440 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.041) 0:06:53.271 ********* 2026-04-07 01:54:32.634451 | orchestrator | 2026-04-07 01:54:32.634463 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:32.634487 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.061) 0:06:53.333 ********* 2026-04-07 01:54:59.248643 | orchestrator | 2026-04-07 01:54:59.248788 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:59.248802 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.050) 0:06:53.383 ********* 2026-04-07 01:54:59.248810 | orchestrator | 2026-04-07 01:54:59.248818 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:59.248825 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.040) 0:06:53.424 ********* 2026-04-07 01:54:59.248833 | orchestrator | 
2026-04-07 01:54:59.248840 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-07 01:54:59.248847 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.048) 0:06:53.473 ********* 2026-04-07 01:54:59.248868 | orchestrator | 2026-04-07 01:54:59.249596 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-07 01:54:59.249611 | orchestrator | Tuesday 07 April 2026 01:54:32 +0000 (0:00:00.042) 0:06:53.516 ********* 2026-04-07 01:54:59.249621 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:59.249631 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:59.249638 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:59.249645 | orchestrator | 2026-04-07 01:54:59.249652 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-07 01:54:59.249660 | orchestrator | Tuesday 07 April 2026 01:54:33 +0000 (0:00:01.178) 0:06:54.694 ********* 2026-04-07 01:54:59.249667 | orchestrator | changed: [testbed-manager] 2026-04-07 01:54:59.249676 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:59.249683 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:59.249690 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:59.249697 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:59.249704 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:59.249711 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:59.249718 | orchestrator | 2026-04-07 01:54:59.249726 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-07 01:54:59.249733 | orchestrator | Tuesday 07 April 2026 01:54:35 +0000 (0:00:01.550) 0:06:56.245 ********* 2026-04-07 01:54:59.249740 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:59.249747 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:59.249754 | orchestrator | changed: [testbed-manager] 
2026-04-07 01:54:59.249761 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:59.249768 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:59.249775 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:59.249781 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:59.249788 | orchestrator | 2026-04-07 01:54:59.249796 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-07 01:54:59.249803 | orchestrator | Tuesday 07 April 2026 01:54:36 +0000 (0:00:01.167) 0:06:57.413 ********* 2026-04-07 01:54:59.249810 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:59.249817 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:59.249824 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:59.249831 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:59.249838 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:59.249845 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:59.249852 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:59.249859 | orchestrator | 2026-04-07 01:54:59.249866 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-07 01:54:59.249873 | orchestrator | Tuesday 07 April 2026 01:54:38 +0000 (0:00:02.411) 0:06:59.824 ********* 2026-04-07 01:54:59.249924 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:54:59.249932 | orchestrator | 2026-04-07 01:54:59.249940 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-07 01:54:59.249947 | orchestrator | Tuesday 07 April 2026 01:54:39 +0000 (0:00:00.130) 0:06:59.955 ********* 2026-04-07 01:54:59.249954 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:59.249961 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.249968 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:59.249975 | orchestrator | changed: [testbed-node-5] 2026-04-07 
01:54:59.249982 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:54:59.249989 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:59.249996 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:59.250003 | orchestrator | 2026-04-07 01:54:59.250010 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-07 01:54:59.250085 | orchestrator | Tuesday 07 April 2026 01:54:40 +0000 (0:00:01.064) 0:07:01.019 ********* 2026-04-07 01:54:59.250093 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:59.250100 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:54:59.250107 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:54:59.250114 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:54:59.250121 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:54:59.250128 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:54:59.250135 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:54:59.250142 | orchestrator | 2026-04-07 01:54:59.250149 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-07 01:54:59.250156 | orchestrator | Tuesday 07 April 2026 01:54:40 +0000 (0:00:00.619) 0:07:01.639 ********* 2026-04-07 01:54:59.250165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:54:59.250175 | orchestrator | 2026-04-07 01:54:59.250182 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-07 01:54:59.250189 | orchestrator | Tuesday 07 April 2026 01:54:41 +0000 (0:00:01.233) 0:07:02.872 ********* 2026-04-07 01:54:59.250196 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.250203 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:59.250210 | orchestrator 
| ok: [testbed-node-4] 2026-04-07 01:54:59.250217 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:59.250224 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:59.250231 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:59.250239 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:59.250245 | orchestrator | 2026-04-07 01:54:59.250253 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-07 01:54:59.250260 | orchestrator | Tuesday 07 April 2026 01:54:42 +0000 (0:00:00.848) 0:07:03.721 ********* 2026-04-07 01:54:59.250267 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-07 01:54:59.250295 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-07 01:54:59.250304 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-07 01:54:59.250312 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-07 01:54:59.250319 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-07 01:54:59.250325 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-07 01:54:59.250333 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-07 01:54:59.250340 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-07 01:54:59.250347 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-07 01:54:59.250354 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-07 01:54:59.250361 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-07 01:54:59.250368 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-07 01:54:59.250383 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-07 01:54:59.250390 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-07 01:54:59.250397 | orchestrator | 2026-04-07 01:54:59.250404 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-07 01:54:59.250411 | orchestrator | Tuesday 07 April 2026 01:54:45 +0000 (0:00:02.513) 0:07:06.235 ********* 2026-04-07 01:54:59.250418 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:59.250425 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:54:59.250432 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:54:59.250439 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:54:59.250446 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:54:59.250453 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:54:59.250459 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:54:59.250466 | orchestrator | 2026-04-07 01:54:59.250473 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-07 01:54:59.250480 | orchestrator | Tuesday 07 April 2026 01:54:46 +0000 (0:00:00.763) 0:07:06.999 ********* 2026-04-07 01:54:59.250489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:54:59.250498 | orchestrator | 2026-04-07 01:54:59.250505 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-07 01:54:59.250512 | orchestrator | Tuesday 07 April 2026 01:54:46 +0000 (0:00:00.893) 0:07:07.892 ********* 2026-04-07 01:54:59.250519 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.250526 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:59.250533 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:59.250540 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:59.250547 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:59.250554 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:59.250561 | orchestrator | ok: 
[testbed-node-2] 2026-04-07 01:54:59.250568 | orchestrator | 2026-04-07 01:54:59.250575 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-07 01:54:59.250582 | orchestrator | Tuesday 07 April 2026 01:54:47 +0000 (0:00:00.896) 0:07:08.788 ********* 2026-04-07 01:54:59.250646 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.250654 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:59.250661 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:59.250668 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:59.250675 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:59.250682 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:59.250689 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:59.250696 | orchestrator | 2026-04-07 01:54:59.250703 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-07 01:54:59.250711 | orchestrator | Tuesday 07 April 2026 01:54:48 +0000 (0:00:01.044) 0:07:09.832 ********* 2026-04-07 01:54:59.250718 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:59.250725 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:54:59.250732 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:54:59.250739 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:54:59.250746 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:54:59.250753 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:54:59.250760 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:54:59.250767 | orchestrator | 2026-04-07 01:54:59.250774 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-07 01:54:59.250781 | orchestrator | Tuesday 07 April 2026 01:54:49 +0000 (0:00:00.554) 0:07:10.386 ********* 2026-04-07 01:54:59.250788 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.250795 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:54:59.250802 | 
orchestrator | ok: [testbed-node-4] 2026-04-07 01:54:59.250809 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:54:59.250816 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:54:59.250829 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:54:59.250836 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:54:59.250843 | orchestrator | 2026-04-07 01:54:59.250850 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-07 01:54:59.250857 | orchestrator | Tuesday 07 April 2026 01:54:50 +0000 (0:00:01.434) 0:07:11.821 ********* 2026-04-07 01:54:59.250865 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:54:59.250872 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:54:59.250879 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:54:59.250886 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:54:59.250893 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:54:59.250900 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:54:59.250907 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:54:59.250914 | orchestrator | 2026-04-07 01:54:59.250921 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-07 01:54:59.250928 | orchestrator | Tuesday 07 April 2026 01:54:51 +0000 (0:00:00.571) 0:07:12.392 ********* 2026-04-07 01:54:59.250935 | orchestrator | ok: [testbed-manager] 2026-04-07 01:54:59.250942 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:54:59.250949 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:54:59.250956 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:54:59.250963 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:54:59.250970 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:54:59.250983 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:55:33.154477 | orchestrator | 2026-04-07 01:55:33.154646 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-04-07 01:55:33.154665 | orchestrator | Tuesday 07 April 2026 01:54:59 +0000 (0:00:07.738) 0:07:20.130 ********* 2026-04-07 01:55:33.154675 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.154686 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:55:33.154696 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:55:33.154705 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:55:33.154714 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:55:33.154724 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:55:33.154733 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:55:33.154742 | orchestrator | 2026-04-07 01:55:33.154752 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-07 01:55:33.154761 | orchestrator | Tuesday 07 April 2026 01:55:00 +0000 (0:00:01.582) 0:07:21.713 ********* 2026-04-07 01:55:33.154770 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.154780 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:55:33.154789 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:55:33.154798 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:55:33.154807 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:55:33.154816 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:55:33.154825 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:55:33.154835 | orchestrator | 2026-04-07 01:55:33.154844 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-07 01:55:33.154853 | orchestrator | Tuesday 07 April 2026 01:55:02 +0000 (0:00:01.756) 0:07:23.470 ********* 2026-04-07 01:55:33.154863 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.154872 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:55:33.154881 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:55:33.154890 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:55:33.154899 | 
orchestrator | changed: [testbed-node-0] 2026-04-07 01:55:33.154908 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:55:33.154918 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:55:33.154927 | orchestrator | 2026-04-07 01:55:33.154936 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 01:55:33.154945 | orchestrator | Tuesday 07 April 2026 01:55:04 +0000 (0:00:01.732) 0:07:25.202 ********* 2026-04-07 01:55:33.154954 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.154964 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.154973 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155005 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155016 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155039 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155050 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.155070 | orchestrator | 2026-04-07 01:55:33.155081 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-07 01:55:33.155091 | orchestrator | Tuesday 07 April 2026 01:55:05 +0000 (0:00:00.874) 0:07:26.076 ********* 2026-04-07 01:55:33.155102 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:55:33.155113 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:55:33.155123 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:55:33.155133 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:55:33.155143 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:55:33.155154 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:55:33.155165 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:55:33.155176 | orchestrator | 2026-04-07 01:55:33.155186 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-07 01:55:33.155197 | orchestrator | Tuesday 07 April 2026 01:55:06 +0000 (0:00:01.089) 0:07:27.166 ********* 
2026-04-07 01:55:33.155207 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:55:33.155217 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:55:33.155228 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:55:33.155239 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:55:33.155249 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:55:33.155260 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:55:33.155270 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:55:33.155280 | orchestrator | 2026-04-07 01:55:33.155291 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-07 01:55:33.155301 | orchestrator | Tuesday 07 April 2026 01:55:06 +0000 (0:00:00.577) 0:07:27.744 ********* 2026-04-07 01:55:33.155312 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155339 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.155350 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155360 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155371 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155382 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155392 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.155402 | orchestrator | 2026-04-07 01:55:33.155411 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-04-07 01:55:33.155420 | orchestrator | Tuesday 07 April 2026 01:55:07 +0000 (0:00:00.626) 0:07:28.371 ********* 2026-04-07 01:55:33.155429 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155438 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.155447 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155457 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155466 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155475 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155484 | orchestrator | ok: [testbed-node-2] 2026-04-07 
01:55:33.155493 | orchestrator | 2026-04-07 01:55:33.155502 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-07 01:55:33.155512 | orchestrator | Tuesday 07 April 2026 01:55:08 +0000 (0:00:00.569) 0:07:28.940 ********* 2026-04-07 01:55:33.155521 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155530 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.155539 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155548 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155557 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155565 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155574 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.155598 | orchestrator | 2026-04-07 01:55:33.155607 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-07 01:55:33.155616 | orchestrator | Tuesday 07 April 2026 01:55:08 +0000 (0:00:00.793) 0:07:29.734 ********* 2026-04-07 01:55:33.155625 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.155634 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155651 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155660 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155669 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.155678 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155687 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155696 | orchestrator | 2026-04-07 01:55:33.155722 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-07 01:55:33.155731 | orchestrator | Tuesday 07 April 2026 01:55:14 +0000 (0:00:05.823) 0:07:35.558 ********* 2026-04-07 01:55:33.155741 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:55:33.155750 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:55:33.155759 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:55:33.155768 
| orchestrator | skipping: [testbed-node-5] 2026-04-07 01:55:33.155777 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:55:33.155786 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:55:33.155795 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:55:33.155804 | orchestrator | 2026-04-07 01:55:33.155813 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-07 01:55:33.155823 | orchestrator | Tuesday 07 April 2026 01:55:15 +0000 (0:00:00.586) 0:07:36.144 ********* 2026-04-07 01:55:33.155834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:55:33.155846 | orchestrator | 2026-04-07 01:55:33.155855 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-07 01:55:33.155864 | orchestrator | Tuesday 07 April 2026 01:55:16 +0000 (0:00:01.068) 0:07:37.213 ********* 2026-04-07 01:55:33.155873 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155883 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.155892 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.155901 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155910 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.155919 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.155928 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.155937 | orchestrator | 2026-04-07 01:55:33.155946 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-07 01:55:33.155955 | orchestrator | Tuesday 07 April 2026 01:55:18 +0000 (0:00:01.998) 0:07:39.212 ********* 2026-04-07 01:55:33.155964 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.155973 | orchestrator | ok: [testbed-node-3] 2026-04-07 
01:55:33.155982 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.155991 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.156000 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.156009 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.156018 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.156027 | orchestrator | 2026-04-07 01:55:33.156036 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-07 01:55:33.156045 | orchestrator | Tuesday 07 April 2026 01:55:19 +0000 (0:00:01.153) 0:07:40.365 ********* 2026-04-07 01:55:33.156054 | orchestrator | ok: [testbed-manager] 2026-04-07 01:55:33.156063 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:55:33.156072 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:55:33.156081 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:55:33.156090 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:55:33.156099 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:55:33.156108 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:55:33.156117 | orchestrator | 2026-04-07 01:55:33.156126 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-07 01:55:33.156135 | orchestrator | Tuesday 07 April 2026 01:55:20 +0000 (0:00:00.857) 0:07:41.223 ********* 2026-04-07 01:55:33.156149 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156160 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156175 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156185 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156194 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156203 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156216 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-07 01:55:33.156231 | orchestrator | 2026-04-07 01:55:33.156245 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-07 01:55:33.156259 | orchestrator | Tuesday 07 April 2026 01:55:22 +0000 (0:00:02.163) 0:07:43.386 ********* 2026-04-07 01:55:33.156280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:55:33.156296 | orchestrator | 2026-04-07 01:55:33.156311 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-07 01:55:33.156325 | orchestrator | Tuesday 07 April 2026 01:55:23 +0000 (0:00:00.978) 0:07:44.365 ********* 2026-04-07 01:55:33.156339 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:55:33.156353 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:55:33.156366 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:55:33.156380 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:55:33.156394 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:55:33.156408 | orchestrator | changed: [testbed-manager] 2026-04-07 01:55:33.156423 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 01:55:33.156437 | orchestrator | 2026-04-07 01:55:33.156461 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-07 01:56:05.283438 | orchestrator | Tuesday 07 April 2026 01:55:33 +0000 (0:00:09.662) 0:07:54.028 ********* 2026-04-07 01:56:05.283550 | orchestrator | ok: [testbed-manager] 2026-04-07 01:56:05.283568 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:56:05.283643 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:56:05.283657 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:56:05.283669 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:56:05.283680 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:56:05.283692 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:56:05.283704 | orchestrator | 2026-04-07 01:56:05.283716 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-07 01:56:05.283730 | orchestrator | Tuesday 07 April 2026 01:55:35 +0000 (0:00:02.062) 0:07:56.091 ********* 2026-04-07 01:56:05.283741 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:56:05.283753 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:56:05.283764 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:56:05.283776 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:56:05.283787 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:56:05.283799 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:56:05.283811 | orchestrator | 2026-04-07 01:56:05.283822 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-07 01:56:05.283834 | orchestrator | Tuesday 07 April 2026 01:55:36 +0000 (0:00:01.319) 0:07:57.410 ********* 2026-04-07 01:56:05.283846 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.283858 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.283870 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.283882 | orchestrator | changed: 
[testbed-node-5] 2026-04-07 01:56:05.283893 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.283929 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.283944 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.283957 | orchestrator | 2026-04-07 01:56:05.283970 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-07 01:56:05.283983 | orchestrator | 2026-04-07 01:56:05.283996 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-07 01:56:05.284010 | orchestrator | Tuesday 07 April 2026 01:55:37 +0000 (0:00:01.220) 0:07:58.631 ********* 2026-04-07 01:56:05.284024 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:56:05.284037 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:56:05.284050 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:56:05.284063 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:56:05.284076 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:56:05.284089 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:56:05.284103 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:56:05.284116 | orchestrator | 2026-04-07 01:56:05.284130 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-07 01:56:05.284142 | orchestrator | 2026-04-07 01:56:05.284155 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-07 01:56:05.284167 | orchestrator | Tuesday 07 April 2026 01:55:38 +0000 (0:00:00.773) 0:07:59.404 ********* 2026-04-07 01:56:05.284178 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.284189 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.284201 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.284212 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.284223 | orchestrator | changed: [testbed-node-0] 2026-04-07 
01:56:05.284235 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.284246 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.284257 | orchestrator | 2026-04-07 01:56:05.284269 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-07 01:56:05.284295 | orchestrator | Tuesday 07 April 2026 01:55:39 +0000 (0:00:01.367) 0:08:00.772 ********* 2026-04-07 01:56:05.284307 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:56:05.284318 | orchestrator | ok: [testbed-manager] 2026-04-07 01:56:05.284330 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:56:05.284341 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:56:05.284352 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:56:05.284364 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:56:05.284375 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:56:05.284387 | orchestrator | 2026-04-07 01:56:05.284398 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-07 01:56:05.284410 | orchestrator | Tuesday 07 April 2026 01:55:41 +0000 (0:00:01.592) 0:08:02.364 ********* 2026-04-07 01:56:05.284421 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:56:05.284432 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:56:05.284444 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:56:05.284455 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:56:05.284466 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:56:05.284478 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:56:05.284489 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:56:05.284500 | orchestrator | 2026-04-07 01:56:05.284512 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-07 01:56:05.284523 | orchestrator | Tuesday 07 April 2026 01:55:42 +0000 (0:00:00.540) 0:08:02.905 ********* 2026-04-07 01:56:05.284536 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:56:05.284549 | orchestrator | 2026-04-07 01:56:05.284561 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-07 01:56:05.284573 | orchestrator | Tuesday 07 April 2026 01:55:43 +0000 (0:00:01.301) 0:08:04.206 ********* 2026-04-07 01:56:05.284611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 01:56:05.284634 | orchestrator | 2026-04-07 01:56:05.284645 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-07 01:56:05.284657 | orchestrator | Tuesday 07 April 2026 01:55:44 +0000 (0:00:00.958) 0:08:05.165 ********* 2026-04-07 01:56:05.284668 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.284680 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.284691 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.284702 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.284714 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.284725 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.284736 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.284747 | orchestrator | 2026-04-07 01:56:05.284779 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-07 01:56:05.284791 | orchestrator | Tuesday 07 April 2026 01:55:53 +0000 (0:00:08.792) 0:08:13.957 ********* 2026-04-07 01:56:05.284802 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.284814 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.284825 | orchestrator | changed: [testbed-node-4] 2026-04-07 
01:56:05.284843 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.284862 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.284879 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.284897 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.284914 | orchestrator | 2026-04-07 01:56:05.284933 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-07 01:56:05.284951 | orchestrator | Tuesday 07 April 2026 01:55:54 +0000 (0:00:01.119) 0:08:15.077 ********* 2026-04-07 01:56:05.284969 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.284987 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.285002 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.285019 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.285038 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.285056 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.285076 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.285095 | orchestrator | 2026-04-07 01:56:05.285114 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-07 01:56:05.285133 | orchestrator | Tuesday 07 April 2026 01:55:55 +0000 (0:00:01.349) 0:08:16.426 ********* 2026-04-07 01:56:05.285154 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.285173 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.285192 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.285206 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.285217 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.285228 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.285240 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.285251 | orchestrator | 2026-04-07 01:56:05.285262 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-04-07 01:56:05.285273 | orchestrator | Tuesday 07 April 2026 01:55:57 +0000 (0:00:02.002) 0:08:18.428 ********* 2026-04-07 01:56:05.285285 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.285296 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.285307 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.285318 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.285330 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.285341 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.285352 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.285363 | orchestrator | 2026-04-07 01:56:05.285375 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-07 01:56:05.285386 | orchestrator | Tuesday 07 April 2026 01:55:58 +0000 (0:00:01.271) 0:08:19.700 ********* 2026-04-07 01:56:05.285397 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.285409 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.285430 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.285442 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.285453 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.285464 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.285475 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.285486 | orchestrator | 2026-04-07 01:56:05.285498 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-07 01:56:05.285509 | orchestrator | 2026-04-07 01:56:05.285529 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-07 01:56:05.285541 | orchestrator | Tuesday 07 April 2026 01:55:59 +0000 (0:00:01.158) 0:08:20.858 ********* 2026-04-07 01:56:05.285553 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-07 01:56:05.285565 | orchestrator | 2026-04-07 01:56:05.285643 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-07 01:56:05.285658 | orchestrator | Tuesday 07 April 2026 01:56:00 +0000 (0:00:00.942) 0:08:21.801 ********* 2026-04-07 01:56:05.285670 | orchestrator | ok: [testbed-manager] 2026-04-07 01:56:05.285681 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:56:05.285692 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:56:05.285704 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:56:05.285715 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:56:05.285726 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:56:05.285737 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:56:05.285749 | orchestrator | 2026-04-07 01:56:05.285760 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-07 01:56:05.285774 | orchestrator | Tuesday 07 April 2026 01:56:02 +0000 (0:00:01.152) 0:08:22.953 ********* 2026-04-07 01:56:05.285802 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:05.285826 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:05.285845 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:05.285864 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:05.285883 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:05.285903 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:05.285923 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:05.285944 | orchestrator | 2026-04-07 01:56:05.285965 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-07 01:56:05.285985 | orchestrator | Tuesday 07 April 2026 01:56:03 +0000 (0:00:01.213) 0:08:24.167 ********* 2026-04-07 01:56:05.286001 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-07 01:56:05.286012 | orchestrator | 2026-04-07 01:56:05.286096 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-07 01:56:05.286108 | orchestrator | Tuesday 07 April 2026 01:56:04 +0000 (0:00:01.091) 0:08:25.259 ********* 2026-04-07 01:56:05.286119 | orchestrator | ok: [testbed-manager] 2026-04-07 01:56:05.286131 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:56:05.286142 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:56:05.286154 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:56:05.286165 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:56:05.286176 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:56:05.286187 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:56:05.286198 | orchestrator | 2026-04-07 01:56:05.286224 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-07 01:56:07.016185 | orchestrator | Tuesday 07 April 2026 01:56:05 +0000 (0:00:00.905) 0:08:26.164 ********* 2026-04-07 01:56:07.016287 | orchestrator | changed: [testbed-manager] 2026-04-07 01:56:07.016307 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:56:07.016323 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:56:07.016338 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:56:07.016354 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:56:07.016364 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:56:07.016373 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:56:07.016407 | orchestrator | 2026-04-07 01:56:07.016418 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 01:56:07.016429 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-07 01:56:07.016440 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-04-07 01:56:07.016449 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-07 01:56:07.016458 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-07 01:56:07.016467 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-07 01:56:07.016475 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 01:56:07.016484 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-07 01:56:07.016493 | orchestrator |
2026-04-07 01:56:07.016502 | orchestrator |
2026-04-07 01:56:07.016511 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:56:07.016521 | orchestrator | Tuesday 07 April 2026 01:56:06 +0000 (0:00:01.162) 0:08:27.326 *********
2026-04-07 01:56:07.016530 | orchestrator | ===============================================================================
2026-04-07 01:56:07.016538 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.78s
2026-04-07 01:56:07.016547 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.43s
2026-04-07 01:56:07.016556 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.96s
2026-04-07 01:56:07.016565 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.42s
2026-04-07 01:56:07.016574 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.09s
2026-04-07 01:56:07.016655 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.75s
2026-04-07 01:56:07.016672 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.94s
2026-04-07 01:56:07.016688 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.23s
2026-04-07 01:56:07.016703 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.66s
2026-04-07 01:56:07.016718 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.42s
2026-04-07 01:56:07.016728 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.79s
2026-04-07 01:56:07.016737 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.39s
2026-04-07 01:56:07.016748 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.29s
2026-04-07 01:56:07.016759 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.95s
2026-04-07 01:56:07.016769 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.90s
2026-04-07 01:56:07.016780 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.74s
2026-04-07 01:56:07.016791 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.20s
2026-04-07 01:56:07.016801 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.01s
2026-04-07 01:56:07.016811 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.82s
2026-04-07 01:56:07.016823 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.81s
2026-04-07 01:56:07.404649 | orchestrator | + osism apply fail2ban
2026-04-07 01:56:20.814341 | orchestrator | 2026-04-07 01:56:20 | INFO  | Task 5c856e47-993b-4b6f-a780-30d653a97f15 (fail2ban) was prepared for execution.
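The PLAY RECAP block above (all hosts reporting failed=0 and unreachable=0) is what a CI wrapper typically inspects to decide whether a run passed. As an illustrative sketch only (not part of the testbed tooling), recap host lines of this shape can be parsed and gated like so:

```python
import re

# Matches Ansible "PLAY RECAP" host lines such as:
#   testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
RECAP_LINE = re.compile(r"^(?P<host>[\w.-]+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    """Return {host: {stat: int}} for every recap line found."""
    hosts = {}
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            stats = dict(
                (key, int(value))
                for key, value in (pair.split("=") for pair in m.group("stats").split())
            )
            hosts[m.group("host")] = stats
    return hosts

def all_green(hosts):
    """True when no host reports failed or unreachable tasks."""
    return all(s["failed"] == 0 and s["unreachable"] == 0 for s in hosts.values())

recap = parse_recap([
    "testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0",
    "testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0",
])
```

The function names and the gating policy here are assumptions for illustration; the actual pass/fail decision in this job is made by the `osism`/Zuul tooling itself.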
2026-04-07 01:56:20.814451 | orchestrator | 2026-04-07 01:56:20 | INFO  | It takes a moment until task 5c856e47-993b-4b6f-a780-30d653a97f15 (fail2ban) has been started and output is visible here.
2026-04-07 01:56:44.081930 | orchestrator |
2026-04-07 01:56:44.082101 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-07 01:56:44.082123 | orchestrator |
2026-04-07 01:56:44.082136 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-07 01:56:44.082148 | orchestrator | Tuesday 07 April 2026 01:56:25 +0000 (0:00:00.294) 0:00:00.294 *********
2026-04-07 01:56:44.082162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 01:56:44.082177 | orchestrator |
2026-04-07 01:56:44.082189 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-07 01:56:44.082200 | orchestrator | Tuesday 07 April 2026 01:56:26 +0000 (0:00:01.251) 0:00:01.546 *********
2026-04-07 01:56:44.082212 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:56:44.082226 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:56:44.082238 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:56:44.082249 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:56:44.082261 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:56:44.082272 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:56:44.082283 | orchestrator | changed: [testbed-manager]
2026-04-07 01:56:44.082296 | orchestrator |
2026-04-07 01:56:44.082307 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-07 01:56:44.082319 | orchestrator | Tuesday 07 April 2026 01:56:38 +0000 (0:00:11.575) 0:00:13.121 *********
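The exact configuration files the osism.services.fail2ban role copies are not shown in this log. As a hypothetical sketch only, a minimal jail override of the kind such a role typically manages could look like:

```ini
# /etc/fail2ban/jail.local -- illustrative sketch, not the actual files
# shipped by osism.services.fail2ban
[DEFAULT]
bantime  = 600
findtime = 600
maxretry = 5

[sshd]
enabled = true
port    = ssh
logpath = %(sshd_log)s
```

The "Reload fail2ban configuration" task that follows then makes fail2ban pick up whatever jails were actually copied.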
2026-04-07 01:56:44.082331 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:56:44.082343 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:56:44.082354 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:56:44.082366 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:56:44.082377 | orchestrator | changed: [testbed-manager]
2026-04-07 01:56:44.082389 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:56:44.082400 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:56:44.082411 | orchestrator |
2026-04-07 01:56:44.082423 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-07 01:56:44.082435 | orchestrator | Tuesday 07 April 2026 01:56:40 +0000 (0:00:01.532) 0:00:14.654 *********
2026-04-07 01:56:44.082446 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:56:44.082459 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:56:44.082470 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:56:44.082484 | orchestrator | ok: [testbed-manager]
2026-04-07 01:56:44.082497 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:56:44.082511 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:56:44.082524 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:56:44.082537 | orchestrator |
2026-04-07 01:56:44.082550 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-07 01:56:44.082564 | orchestrator | Tuesday 07 April 2026 01:56:41 +0000 (0:00:01.683) 0:00:16.337 *********
2026-04-07 01:56:44.082651 | orchestrator | changed: [testbed-manager]
2026-04-07 01:56:44.082665 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:56:44.082691 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:56:44.082716 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:56:44.082729 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:56:44.082742 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:56:44.082756 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:56:44.082769 | orchestrator |
2026-04-07 01:56:44.082782 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:56:44.082797 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082838 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082853 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082867 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082879 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082890 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082902 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:56:44.082913 | orchestrator |
2026-04-07 01:56:44.082925 | orchestrator |
2026-04-07 01:56:44.082936 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:56:44.082948 | orchestrator | Tuesday 07 April 2026 01:56:43 +0000 (0:00:01.829) 0:00:18.167 *********
2026-04-07 01:56:44.082960 | orchestrator | ===============================================================================
2026-04-07 01:56:44.082971 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.58s
2026-04-07 01:56:44.082982 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.83s
2026-04-07 01:56:44.082994 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.68s
2026-04-07 01:56:44.083005 | orchestrator | osism.services.fail2ban :
Copy configuration files ---------------------- 1.53s 2026-04-07 01:56:44.083017 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.25s 2026-04-07 01:56:44.446727 | orchestrator | + osism apply network 2026-04-07 01:56:56.720757 | orchestrator | 2026-04-07 01:56:56 | INFO  | Task c23689de-81fd-4f38-b432-7f74868fd340 (network) was prepared for execution. 2026-04-07 01:56:56.720895 | orchestrator | 2026-04-07 01:56:56 | INFO  | It takes a moment until task c23689de-81fd-4f38-b432-7f74868fd340 (network) has been started and output is visible here. 2026-04-07 01:57:27.760959 | orchestrator | 2026-04-07 01:57:27.761066 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-07 01:57:27.761084 | orchestrator | 2026-04-07 01:57:27.761097 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-07 01:57:27.761110 | orchestrator | Tuesday 07 April 2026 01:57:01 +0000 (0:00:00.295) 0:00:00.295 ********* 2026-04-07 01:57:27.761121 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.761133 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.761145 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.761156 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.761167 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.761179 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.761190 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.761201 | orchestrator | 2026-04-07 01:57:27.761213 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-07 01:57:27.761224 | orchestrator | Tuesday 07 April 2026 01:57:01 +0000 (0:00:00.776) 0:00:01.071 ********* 2026-04-07 01:57:27.761237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:57:27.761251 | orchestrator | 2026-04-07 01:57:27.761262 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-07 01:57:27.761274 | orchestrator | Tuesday 07 April 2026 01:57:03 +0000 (0:00:01.350) 0:00:02.422 ********* 2026-04-07 01:57:27.761309 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.761321 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.761332 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.761343 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.761354 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.761365 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.761376 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.761388 | orchestrator | 2026-04-07 01:57:27.761400 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-07 01:57:27.761411 | orchestrator | Tuesday 07 April 2026 01:57:05 +0000 (0:00:02.023) 0:00:04.446 ********* 2026-04-07 01:57:27.761422 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.761434 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.761445 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.761457 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.761468 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.761479 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.761490 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.761503 | orchestrator | 2026-04-07 01:57:27.761515 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-07 01:57:27.761528 | orchestrator | Tuesday 07 April 2026 01:57:07 +0000 (0:00:02.171) 0:00:06.617 ********* 2026-04-07 01:57:27.761541 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-07 01:57:27.761555 | orchestrator | ok: 
[testbed-node-0] => (item=/etc/netplan) 2026-04-07 01:57:27.761568 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-07 01:57:27.761612 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-07 01:57:27.761626 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-07 01:57:27.761640 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-07 01:57:27.761653 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-07 01:57:27.761665 | orchestrator | 2026-04-07 01:57:27.761695 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-07 01:57:27.761710 | orchestrator | Tuesday 07 April 2026 01:57:08 +0000 (0:00:01.039) 0:00:07.657 ********* 2026-04-07 01:57:27.761727 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 01:57:27.761741 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 01:57:27.761755 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:57:27.761769 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 01:57:27.761782 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:57:27.761794 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 01:57:27.761807 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 01:57:27.761818 | orchestrator | 2026-04-07 01:57:27.761830 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-07 01:57:27.761841 | orchestrator | Tuesday 07 April 2026 01:57:12 +0000 (0:00:03.986) 0:00:11.644 ********* 2026-04-07 01:57:27.761852 | orchestrator | changed: [testbed-manager] 2026-04-07 01:57:27.761863 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:57:27.761875 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:57:27.761886 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:57:27.761896 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:57:27.761908 | orchestrator | 
changed: [testbed-node-4] 2026-04-07 01:57:27.761919 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:57:27.761930 | orchestrator | 2026-04-07 01:57:27.761941 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-07 01:57:27.761953 | orchestrator | Tuesday 07 April 2026 01:57:14 +0000 (0:00:01.699) 0:00:13.343 ********* 2026-04-07 01:57:27.761964 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 01:57:27.761975 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 01:57:27.761986 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 01:57:27.761997 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 01:57:27.762008 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 01:57:27.762082 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 01:57:27.762095 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 01:57:27.762107 | orchestrator | 2026-04-07 01:57:27.762118 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-07 01:57:27.762129 | orchestrator | Tuesday 07 April 2026 01:57:16 +0000 (0:00:01.913) 0:00:15.256 ********* 2026-04-07 01:57:27.762140 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.762152 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.762163 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.762183 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.762195 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.762206 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.762217 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.762228 | orchestrator | 2026-04-07 01:57:27.762240 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-07 01:57:27.762270 | orchestrator | Tuesday 07 April 2026 01:57:17 +0000 (0:00:01.312) 0:00:16.569 ********* 2026-04-07 01:57:27.762283 | orchestrator 
| skipping: [testbed-manager] 2026-04-07 01:57:27.762294 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:57:27.762305 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:57:27.762317 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:57:27.762328 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:57:27.762340 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:57:27.762351 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:57:27.762362 | orchestrator | 2026-04-07 01:57:27.762374 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-07 01:57:27.762386 | orchestrator | Tuesday 07 April 2026 01:57:18 +0000 (0:00:00.710) 0:00:17.280 ********* 2026-04-07 01:57:27.762397 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.762408 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.762420 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.762431 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.762442 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.762454 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.762465 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.762476 | orchestrator | 2026-04-07 01:57:27.762488 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-07 01:57:27.762499 | orchestrator | Tuesday 07 April 2026 01:57:20 +0000 (0:00:02.203) 0:00:19.484 ********* 2026-04-07 01:57:27.762511 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:57:27.762522 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:57:27.762533 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:57:27.762545 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:57:27.762556 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:57:27.762567 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:57:27.762606 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-04-07 01:57:27.762619 | orchestrator | 2026-04-07 01:57:27.762631 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-07 01:57:27.762642 | orchestrator | Tuesday 07 April 2026 01:57:21 +0000 (0:00:00.985) 0:00:20.469 ********* 2026-04-07 01:57:27.762654 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.762665 | orchestrator | changed: [testbed-node-1] 2026-04-07 01:57:27.762676 | orchestrator | changed: [testbed-node-0] 2026-04-07 01:57:27.762687 | orchestrator | changed: [testbed-node-2] 2026-04-07 01:57:27.762699 | orchestrator | changed: [testbed-node-3] 2026-04-07 01:57:27.762710 | orchestrator | changed: [testbed-node-4] 2026-04-07 01:57:27.762721 | orchestrator | changed: [testbed-node-5] 2026-04-07 01:57:27.762732 | orchestrator | 2026-04-07 01:57:27.762744 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-07 01:57:27.762755 | orchestrator | Tuesday 07 April 2026 01:57:23 +0000 (0:00:01.714) 0:00:22.184 ********* 2026-04-07 01:57:27.762766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:57:27.762786 | orchestrator | 2026-04-07 01:57:27.762798 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-07 01:57:27.762809 | orchestrator | Tuesday 07 April 2026 01:57:24 +0000 (0:00:01.392) 0:00:23.577 ********* 2026-04-07 01:57:27.762820 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.762832 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.762843 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.762854 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.762865 | orchestrator | 
ok: [testbed-node-3] 2026-04-07 01:57:27.762882 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.762893 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.762905 | orchestrator | 2026-04-07 01:57:27.762916 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-07 01:57:27.762927 | orchestrator | Tuesday 07 April 2026 01:57:25 +0000 (0:00:01.211) 0:00:24.788 ********* 2026-04-07 01:57:27.762939 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:27.762950 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:27.762961 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:27.762972 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:27.762983 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:27.762994 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:27.763005 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:27.763016 | orchestrator | 2026-04-07 01:57:27.763027 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-07 01:57:27.763039 | orchestrator | Tuesday 07 April 2026 01:57:26 +0000 (0:00:00.704) 0:00:25.493 ********* 2026-04-07 01:57:27.763050 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763061 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763072 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763084 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763095 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763106 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763117 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763128 | orchestrator | 
changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763139 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763151 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763162 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763173 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763184 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-07 01:57:27.763195 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-07 01:57:27.763207 | orchestrator | 2026-04-07 01:57:27.763225 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-07 01:57:46.866835 | orchestrator | Tuesday 07 April 2026 01:57:27 +0000 (0:00:01.337) 0:00:26.830 ********* 2026-04-07 01:57:46.866921 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:57:46.866931 | orchestrator | skipping: [testbed-node-0] 2026-04-07 01:57:46.866939 | orchestrator | skipping: [testbed-node-1] 2026-04-07 01:57:46.866946 | orchestrator | skipping: [testbed-node-2] 2026-04-07 01:57:46.866952 | orchestrator | skipping: [testbed-node-3] 2026-04-07 01:57:46.866959 | orchestrator | skipping: [testbed-node-4] 2026-04-07 01:57:46.866965 | orchestrator | skipping: [testbed-node-5] 2026-04-07 01:57:46.866972 | orchestrator | 2026-04-07 01:57:46.866979 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-07 01:57:46.867004 | orchestrator | Tuesday 07 April 2026 01:57:28 +0000 (0:00:00.706) 0:00:27.537 ********* 2026-04-07 01:57:46.867013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-3, testbed-node-2, testbed-node-4 2026-04-07 01:57:46.867021 | orchestrator | 2026-04-07 01:57:46.867032 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-07 01:57:46.867042 | orchestrator | Tuesday 07 April 2026 01:57:34 +0000 (0:00:05.816) 0:00:33.353 ********* 2026-04-07 01:57:46.867062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867111 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867189 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-07 
01:57:46.867243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867257 | orchestrator | 2026-04-07 01:57:46.867263 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-07 01:57:46.867270 | orchestrator | Tuesday 07 April 2026 01:57:40 +0000 (0:00:06.095) 0:00:39.449 ********* 2026-04-07 01:57:46.867277 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867297 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-07 01:57:46.867341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:46.867370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:53.812392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-07 01:57:53.812503 | orchestrator | 2026-04-07 01:57:53.812522 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-07 01:57:53.812539 | orchestrator | Tuesday 07 April 2026 01:57:46 +0000 (0:00:06.481) 0:00:45.931 ********* 2026-04-07 01:57:53.812555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 01:57:53.812568 | orchestrator | 2026-04-07 01:57:53.812631 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-07 01:57:53.812645 | orchestrator | Tuesday 07 April 2026 01:57:48 +0000 (0:00:01.440) 0:00:47.371 ********* 2026-04-07 
01:57:53.812658 | orchestrator | ok: [testbed-manager] 2026-04-07 01:57:53.812672 | orchestrator | ok: [testbed-node-0] 2026-04-07 01:57:53.812685 | orchestrator | ok: [testbed-node-1] 2026-04-07 01:57:53.812698 | orchestrator | ok: [testbed-node-2] 2026-04-07 01:57:53.812710 | orchestrator | ok: [testbed-node-3] 2026-04-07 01:57:53.812723 | orchestrator | ok: [testbed-node-4] 2026-04-07 01:57:53.812735 | orchestrator | ok: [testbed-node-5] 2026-04-07 01:57:53.812748 | orchestrator | 2026-04-07 01:57:53.812761 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-07 01:57:53.812774 | orchestrator | Tuesday 07 April 2026 01:57:49 +0000 (0:00:01.261) 0:00:48.633 ********* 2026-04-07 01:57:53.812787 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-07 01:57:53.812801 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-07 01:57:53.812813 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-07 01:57:53.812826 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-07 01:57:53.812836 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-07 01:57:53.812848 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-07 01:57:53.812861 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-07 01:57:53.812873 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-07 01:57:53.812885 | orchestrator | skipping: [testbed-manager] 2026-04-07 01:57:53.812899 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-07 01:57:53.812912 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 01:57:53.812942 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 01:57:53.812957 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 01:57:53.812976 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:57:53.812988 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 01:57:53.813027 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 01:57:53.813043 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 01:57:53.813055 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 01:57:53.813069 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:57:53.813082 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 01:57:53.813096 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 01:57:53.813110 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 01:57:53.813123 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:57:53.813136 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 01:57:53.813150 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 01:57:53.813163 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 01:57:53.813176 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 01:57:53.813189 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 01:57:53.813202 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:57:53.813215 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:57:53.813229 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-07 01:57:53.813242 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-07 01:57:53.813255 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-07 01:57:53.813269 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-07 01:57:53.813282 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:57:53.813294 | orchestrator |
2026-04-07 01:57:53.813307 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-07 01:57:53.813340 | orchestrator | Tuesday 07 April 2026 01:57:51 +0000 (0:00:02.258) 0:00:50.891 *********
2026-04-07 01:57:53.813353 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:57:53.813365 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:57:53.813377 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:57:53.813389 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:57:53.813401 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:57:53.813413 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:57:53.813425 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:57:53.813437 | orchestrator |
2026-04-07 01:57:53.813450 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-07 01:57:53.813461 | orchestrator | Tuesday 07 April 2026 01:57:52 +0000 (0:00:00.750) 0:00:51.641 *********
2026-04-07 01:57:53.813474 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:57:53.813486 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:57:53.813497 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:57:53.813509 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:57:53.813522 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:57:53.813534 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:57:53.813547 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:57:53.813559 | orchestrator |
2026-04-07 01:57:53.813594 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:57:53.813610 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 01:57:53.813624 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813647 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813661 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813675 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813688 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813701 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 01:57:53.813715 | orchestrator |
2026-04-07 01:57:53.813727 | orchestrator |
2026-04-07 01:57:53.813740 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:57:53.813753 | orchestrator | Tuesday 07 April 2026 01:57:53 +0000 (0:00:00.766) 0:00:52.408 *********
2026-04-07 01:57:53.813766 | orchestrator | ===============================================================================
2026-04-07 01:57:53.813786 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.48s
2026-04-07 01:57:53.813799 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.10s
2026-04-07 01:57:53.813812 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.82s
2026-04-07 01:57:53.813826 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.99s
2026-04-07 01:57:53.813839 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.26s
2026-04-07 01:57:53.813852 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s
2026-04-07 01:57:53.813865 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.17s
2026-04-07 01:57:53.813888 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s
2026-04-07 01:57:53.813900 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.91s
2026-04-07 01:57:53.813912 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s
2026-04-07 01:57:53.813924 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s
2026-04-07 01:57:53.813936 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.44s
2026-04-07 01:57:53.813949 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2026-04-07 01:57:53.813961 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.35s
2026-04-07 01:57:53.813973 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s
2026-04-07 01:57:53.813986 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.31s
2026-04-07 01:57:53.813998 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.26s
2026-04-07 01:57:53.814010 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2026-04-07 01:57:53.814079 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s
2026-04-07 01:57:53.814093 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.99s
2026-04-07 01:57:54.200374 | orchestrator | + osism apply wireguard
2026-04-07 01:58:06.390517 | orchestrator | 2026-04-07 01:58:06 | INFO  | Task cb0d1815-0098-4f81-86ac-6a42a2835cc6 (wireguard) was prepared for execution.
2026-04-07 01:58:06.390691 | orchestrator | 2026-04-07 01:58:06 | INFO  | It takes a moment until task cb0d1815-0098-4f81-86ac-6a42a2835cc6 (wireguard) has been started and output is visible here.
2026-04-07 01:58:28.803976 | orchestrator |
2026-04-07 01:58:28.804071 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-07 01:58:28.804102 | orchestrator |
2026-04-07 01:58:28.804110 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-07 01:58:28.804118 | orchestrator | Tuesday 07 April 2026 01:58:11 +0000 (0:00:00.235) 0:00:00.235 *********
2026-04-07 01:58:28.804125 | orchestrator | ok: [testbed-manager]
2026-04-07 01:58:28.804133 | orchestrator |
2026-04-07 01:58:28.804140 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-07 01:58:28.804147 | orchestrator | Tuesday 07 April 2026 01:58:13 +0000 (0:00:01.671) 0:00:01.906 *********
2026-04-07 01:58:28.804154 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804162 | orchestrator |
2026-04-07 01:58:28.804172 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-07 01:58:28.804180 | orchestrator | Tuesday 07 April 2026 01:58:20 +0000 (0:00:07.279) 0:00:09.186 *********
2026-04-07 01:58:28.804186 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804193 | orchestrator |
2026-04-07 01:58:28.804201 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-07 01:58:28.804208 | orchestrator | Tuesday 07 April 2026 01:58:21 +0000 (0:00:00.601) 0:00:09.788 *********
2026-04-07 01:58:28.804215 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804222 | orchestrator |
2026-04-07 01:58:28.804229 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-07 01:58:28.804236 | orchestrator | Tuesday 07 April 2026 01:58:21 +0000 (0:00:00.469) 0:00:10.257 *********
2026-04-07 01:58:28.804243 | orchestrator | ok: [testbed-manager]
2026-04-07 01:58:28.804250 | orchestrator |
2026-04-07 01:58:28.804257 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-07 01:58:28.804264 | orchestrator | Tuesday 07 April 2026 01:58:22 +0000 (0:00:00.727) 0:00:10.985 *********
2026-04-07 01:58:28.804271 | orchestrator | ok: [testbed-manager]
2026-04-07 01:58:28.804278 | orchestrator |
2026-04-07 01:58:28.804285 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-07 01:58:28.804292 | orchestrator | Tuesday 07 April 2026 01:58:22 +0000 (0:00:00.443) 0:00:11.428 *********
2026-04-07 01:58:28.804299 | orchestrator | ok: [testbed-manager]
2026-04-07 01:58:28.804306 | orchestrator |
2026-04-07 01:58:28.804313 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-07 01:58:28.804320 | orchestrator | Tuesday 07 April 2026 01:58:23 +0000 (0:00:00.500) 0:00:11.929 *********
2026-04-07 01:58:28.804327 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804334 | orchestrator |
2026-04-07 01:58:28.804341 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-07 01:58:28.804348 | orchestrator | Tuesday 07 April 2026 01:58:24 +0000 (0:00:01.303) 0:00:13.233 *********
2026-04-07 01:58:28.804355 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-07 01:58:28.804362 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804369 | orchestrator |
2026-04-07 01:58:28.804376 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-07 01:58:28.804383 | orchestrator | Tuesday 07 April 2026 01:58:25 +0000 (0:00:00.995) 0:00:14.228 *********
2026-04-07 01:58:28.804390 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804397 | orchestrator |
2026-04-07 01:58:28.804405 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-07 01:58:28.804412 | orchestrator | Tuesday 07 April 2026 01:58:27 +0000 (0:00:01.863) 0:00:16.092 *********
2026-04-07 01:58:28.804419 | orchestrator | changed: [testbed-manager]
2026-04-07 01:58:28.804426 | orchestrator |
2026-04-07 01:58:28.804433 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:58:28.804440 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 01:58:28.804448 | orchestrator |
2026-04-07 01:58:28.804455 | orchestrator |
2026-04-07 01:58:28.804462 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:58:28.804470 | orchestrator | Tuesday 07 April 2026 01:58:28 +0000 (0:00:00.998) 0:00:17.091 *********
2026-04-07 01:58:28.804482 | orchestrator | ===============================================================================
2026-04-07 01:58:28.804489 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.28s
2026-04-07 01:58:28.804497 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.86s
2026-04-07 01:58:28.804503 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.67s
2026-04-07 01:58:28.804510 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.30s
2026-04-07 01:58:28.804517 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s
2026-04-07 01:58:28.804524 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s
2026-04-07 01:58:28.804531 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.73s
2026-04-07 01:58:28.804539 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s
2026-04-07 01:58:28.804547 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.50s
2026-04-07 01:58:28.804555 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2026-04-07 01:58:28.804563 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-04-07 01:58:29.191819 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-07 01:58:29.230315 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-07 01:58:29.230446 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-07 01:58:29.307330 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197
2026-04-07 01:58:29.321413 | orchestrator | + osism apply --environment custom workarounds
2026-04-07 01:58:31.401092 | orchestrator | 2026-04-07 01:58:31 | INFO  | Trying to run play workarounds in environment custom
2026-04-07 01:58:41.521013 | orchestrator | 2026-04-07 01:58:41 | INFO  | Task ca6c71de-4eb7-4fb0-8147-a3a2fb767616 (workarounds) was prepared for execution.
2026-04-07 01:58:41.521165 | orchestrator | 2026-04-07 01:58:41 | INFO  | It takes a moment until task ca6c71de-4eb7-4fb0-8147-a3a2fb767616 (workarounds) has been started and output is visible here.
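For context on the wireguard play above: the "Copy wg0.conf configuration file" task renders a wg-quick server configuration. The fragment below is a hedged sketch of the general shape of such a file; every key, address, and port is a placeholder, not a value generated by this run or taken from the osism role.

```ini
; Hypothetical wg0.conf sketch -- all values are placeholders.
[Interface]
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

The preceding "Create ... key" and "Get ... key" tasks supply the key material that such a template consumes; the "Restart wg0 service" handler then brings the interface up via wg-quick.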
2026-04-07 01:59:08.243293 | orchestrator |
2026-04-07 01:59:08.243416 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 01:59:08.243429 | orchestrator |
2026-04-07 01:59:08.243438 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-07 01:59:08.243447 | orchestrator | Tuesday 07 April 2026 01:58:46 +0000 (0:00:00.136) 0:00:00.136 *********
2026-04-07 01:59:08.243455 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243464 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243472 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243480 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243488 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243495 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243503 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-07 01:59:08.243510 | orchestrator |
2026-04-07 01:59:08.243519 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-07 01:59:08.243526 | orchestrator |
2026-04-07 01:59:08.243533 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-07 01:59:08.243558 | orchestrator | Tuesday 07 April 2026 01:58:46 +0000 (0:00:00.851) 0:00:00.988 *********
2026-04-07 01:59:08.243565 | orchestrator | ok: [testbed-manager]
2026-04-07 01:59:08.243587 | orchestrator |
2026-04-07 01:59:08.243624 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-07 01:59:08.243632 | orchestrator |
2026-04-07 01:59:08.243639 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-07 01:59:08.243646 | orchestrator | Tuesday 07 April 2026 01:58:49 +0000 (0:00:02.670) 0:00:03.658 *********
2026-04-07 01:59:08.243653 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:59:08.243661 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:59:08.243668 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:59:08.243675 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:59:08.243682 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:59:08.243689 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:59:08.243696 | orchestrator |
2026-04-07 01:59:08.243703 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-07 01:59:08.243710 | orchestrator |
2026-04-07 01:59:08.243717 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-07 01:59:08.243732 | orchestrator | Tuesday 07 April 2026 01:58:51 +0000 (0:00:01.845) 0:00:05.503 *********
2026-04-07 01:59:08.243740 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243749 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243756 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243764 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243770 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243777 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-07 01:59:08.243785 | orchestrator |
2026-04-07 01:59:08.243792 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-07 01:59:08.243798 | orchestrator | Tuesday 07 April 2026 01:58:52 +0000 (0:00:01.574) 0:00:07.078 *********
2026-04-07 01:59:08.243806 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:59:08.243813 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:59:08.243820 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:59:08.243827 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:59:08.243834 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:59:08.243841 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:59:08.243848 | orchestrator |
2026-04-07 01:59:08.243855 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-07 01:59:08.243862 | orchestrator | Tuesday 07 April 2026 01:58:56 +0000 (0:00:03.815) 0:00:10.893 *********
2026-04-07 01:59:08.243869 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:59:08.243877 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:59:08.243884 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:59:08.243891 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:59:08.243898 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:59:08.243905 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:59:08.243912 | orchestrator |
2026-04-07 01:59:08.243919 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-07 01:59:08.243926 | orchestrator |
2026-04-07 01:59:08.243933 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-07 01:59:08.243941 | orchestrator | Tuesday 07 April 2026 01:58:57 +0000 (0:00:00.764) 0:00:11.658 *********
2026-04-07 01:59:08.243948 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:59:08.243956 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:59:08.243964 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:59:08.243970 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:59:08.243977 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:59:08.243984 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:59:08.243990 | orchestrator | changed: [testbed-manager]
2026-04-07 01:59:08.244004 | orchestrator |
2026-04-07 01:59:08.244012 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-07 01:59:08.244019 | orchestrator | Tuesday 07 April 2026 01:58:59 +0000 (0:00:01.696) 0:00:13.355 *********
2026-04-07 01:59:08.244025 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:59:08.244032 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:59:08.244039 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:59:08.244046 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:59:08.244053 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:59:08.244060 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:59:08.244084 | orchestrator | changed: [testbed-manager]
2026-04-07 01:59:08.244088 | orchestrator |
2026-04-07 01:59:08.244093 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-07 01:59:08.244097 | orchestrator | Tuesday 07 April 2026 01:59:00 +0000 (0:00:01.683) 0:00:15.038 *********
2026-04-07 01:59:08.244102 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:59:08.244106 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:59:08.244110 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:59:08.244115 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:59:08.244119 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:59:08.244123 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:59:08.244128 | orchestrator | ok: [testbed-manager]
2026-04-07 01:59:08.244132 | orchestrator |
2026-04-07 01:59:08.244136 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-07 01:59:08.244141 | orchestrator | Tuesday 07 April 2026 01:59:02 +0000 (0:00:01.743) 0:00:16.782 *********
2026-04-07 01:59:08.244145 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:59:08.244149 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:59:08.244153 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:59:08.244157 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:59:08.244161 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:59:08.244165 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:59:08.244169 | orchestrator | changed: [testbed-manager]
2026-04-07 01:59:08.244173 | orchestrator |
2026-04-07 01:59:08.244176 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-07 01:59:08.244180 | orchestrator | Tuesday 07 April 2026 01:59:04 +0000 (0:00:01.885) 0:00:18.667 *********
2026-04-07 01:59:08.244184 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:59:08.244188 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:59:08.244192 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:59:08.244196 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:59:08.244200 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:59:08.244204 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:59:08.244208 | orchestrator | skipping: [testbed-manager]
2026-04-07 01:59:08.244212 | orchestrator |
2026-04-07 01:59:08.244216 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-07 01:59:08.244220 | orchestrator |
2026-04-07 01:59:08.244224 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-07 01:59:08.244228 | orchestrator | Tuesday 07 April 2026 01:59:05 +0000 (0:00:00.658) 0:00:19.326 *********
2026-04-07 01:59:08.244232 | orchestrator | ok: [testbed-node-3]
2026-04-07 01:59:08.244236 | orchestrator | ok: [testbed-node-1]
2026-04-07 01:59:08.244239 | orchestrator | ok: [testbed-node-2]
2026-04-07 01:59:08.244243 | orchestrator | ok: [testbed-node-4]
2026-04-07 01:59:08.244247 | orchestrator | ok: [testbed-node-5]
2026-04-07 01:59:08.244251 | orchestrator | ok: [testbed-node-0]
2026-04-07 01:59:08.244258 | orchestrator | ok: [testbed-manager]
2026-04-07 01:59:08.244262 | orchestrator |
2026-04-07 01:59:08.244266 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:59:08.244272 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 01:59:08.244278 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244286 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244291 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244294 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244298 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244302 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:08.244306 | orchestrator |
2026-04-07 01:59:08.244310 | orchestrator |
2026-04-07 01:59:08.244314 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:59:08.244318 | orchestrator | Tuesday 07 April 2026 01:59:08 +0000 (0:00:02.994) 0:00:22.320 *********
2026-04-07 01:59:08.244322 | orchestrator | ===============================================================================
2026-04-07 01:59:08.244326 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s
2026-04-07 01:59:08.244330 | orchestrator | Install python3-docker -------------------------------------------------- 2.99s
2026-04-07 01:59:08.244334 | orchestrator | Apply netplan configuration --------------------------------------------- 2.67s
2026-04-07 01:59:08.244338 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2026-04-07 01:59:08.244342 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s
2026-04-07 01:59:08.244346 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.74s
2026-04-07 01:59:08.244350 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2026-04-07 01:59:08.244353 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.68s
2026-04-07 01:59:08.244357 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s
2026-04-07 01:59:08.244361 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s
2026-04-07 01:59:08.244365 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s
2026-04-07 01:59:08.244372 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-04-07 01:59:09.061445 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-07 01:59:21.297704 | orchestrator | 2026-04-07 01:59:21 | INFO  | Task a46b8f25-472a-4bc6-951f-2dad702cffca (reboot) was prepared for execution.
2026-04-07 01:59:21.297828 | orchestrator | 2026-04-07 01:59:21 | INFO  | It takes a moment until task a46b8f25-472a-4bc6-951f-2dad702cffca (reboot) has been started and output is visible here.
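The workaround service installed in the play above ("Copy workarounds systemd unit file", "Enable workarounds.service") is a boot-time hook around a shell script. A minimal sketch of what such a oneshot unit can look like follows; the paths, description, and options are assumptions for illustration, not the unit file from the testbed repository.

```ini
; Hypothetical workarounds.service sketch; paths and options are assumed.
[Unit]
Description=Apply testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

A unit of this shape explains the task sequence in the log: copy the script, copy the unit file, reload the systemd daemon, then enable the service.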
2026-04-07 01:59:32.102551 | orchestrator |
2026-04-07 01:59:32.102725 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.102746 | orchestrator |
2026-04-07 01:59:32.102759 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.102771 | orchestrator | Tuesday 07 April 2026 01:59:25 +0000 (0:00:00.225) 0:00:00.225 *********
2026-04-07 01:59:32.102783 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:59:32.102796 | orchestrator |
2026-04-07 01:59:32.102808 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.102819 | orchestrator | Tuesday 07 April 2026 01:59:25 +0000 (0:00:00.116) 0:00:00.342 *********
2026-04-07 01:59:32.102830 | orchestrator | changed: [testbed-node-0]
2026-04-07 01:59:32.102846 | orchestrator |
2026-04-07 01:59:32.102865 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.102915 | orchestrator | Tuesday 07 April 2026 01:59:26 +0000 (0:00:01.023) 0:00:01.365 *********
2026-04-07 01:59:32.102934 | orchestrator | skipping: [testbed-node-0]
2026-04-07 01:59:32.102950 | orchestrator |
2026-04-07 01:59:32.102967 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.102984 | orchestrator |
2026-04-07 01:59:32.103001 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.103019 | orchestrator | Tuesday 07 April 2026 01:59:27 +0000 (0:00:00.139) 0:00:01.505 *********
2026-04-07 01:59:32.103051 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:59:32.103081 | orchestrator |
2026-04-07 01:59:32.103099 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.103118 | orchestrator | Tuesday 07 April 2026 01:59:27 +0000 (0:00:00.111) 0:00:01.616 *********
2026-04-07 01:59:32.103136 | orchestrator | changed: [testbed-node-1]
2026-04-07 01:59:32.103154 | orchestrator |
2026-04-07 01:59:32.103174 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.103211 | orchestrator | Tuesday 07 April 2026 01:59:27 +0000 (0:00:00.659) 0:00:02.275 *********
2026-04-07 01:59:32.103232 | orchestrator | skipping: [testbed-node-1]
2026-04-07 01:59:32.103249 | orchestrator |
2026-04-07 01:59:32.103266 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.103284 | orchestrator |
2026-04-07 01:59:32.103302 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.103319 | orchestrator | Tuesday 07 April 2026 01:59:27 +0000 (0:00:00.110) 0:00:02.386 *********
2026-04-07 01:59:32.103336 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:59:32.103355 | orchestrator |
2026-04-07 01:59:32.103373 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.103391 | orchestrator | Tuesday 07 April 2026 01:59:28 +0000 (0:00:00.232) 0:00:02.618 *********
2026-04-07 01:59:32.103410 | orchestrator | changed: [testbed-node-2]
2026-04-07 01:59:32.103430 | orchestrator |
2026-04-07 01:59:32.103449 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.103467 | orchestrator | Tuesday 07 April 2026 01:59:28 +0000 (0:00:00.660) 0:00:03.278 *********
2026-04-07 01:59:32.103487 | orchestrator | skipping: [testbed-node-2]
2026-04-07 01:59:32.103506 | orchestrator |
2026-04-07 01:59:32.103524 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.103543 | orchestrator |
2026-04-07 01:59:32.103554 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.103566 | orchestrator | Tuesday 07 April 2026 01:59:28 +0000 (0:00:00.127) 0:00:03.406 *********
2026-04-07 01:59:32.103577 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:59:32.103661 | orchestrator |
2026-04-07 01:59:32.103674 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.103685 | orchestrator | Tuesday 07 April 2026 01:59:29 +0000 (0:00:00.109) 0:00:03.516 *********
2026-04-07 01:59:32.103697 | orchestrator | changed: [testbed-node-3]
2026-04-07 01:59:32.103708 | orchestrator |
2026-04-07 01:59:32.103719 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.103730 | orchestrator | Tuesday 07 April 2026 01:59:29 +0000 (0:00:00.674) 0:00:04.190 *********
2026-04-07 01:59:32.103741 | orchestrator | skipping: [testbed-node-3]
2026-04-07 01:59:32.103752 | orchestrator |
2026-04-07 01:59:32.103764 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.103775 | orchestrator |
2026-04-07 01:59:32.103786 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.103797 | orchestrator | Tuesday 07 April 2026 01:59:29 +0000 (0:00:00.125) 0:00:04.316 *********
2026-04-07 01:59:32.103808 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:59:32.103819 | orchestrator |
2026-04-07 01:59:32.103831 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.103843 | orchestrator | Tuesday 07 April 2026 01:59:30 +0000 (0:00:00.143) 0:00:04.460 *********
2026-04-07 01:59:32.103868 | orchestrator | changed: [testbed-node-4]
2026-04-07 01:59:32.103879 | orchestrator |
2026-04-07 01:59:32.103890 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.103902 | orchestrator | Tuesday 07 April 2026 01:59:30 +0000 (0:00:00.695) 0:00:05.156 *********
2026-04-07 01:59:32.103913 | orchestrator | skipping: [testbed-node-4]
2026-04-07 01:59:32.103924 | orchestrator |
2026-04-07 01:59:32.103936 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-07 01:59:32.103947 | orchestrator |
2026-04-07 01:59:32.103959 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-07 01:59:32.103970 | orchestrator | Tuesday 07 April 2026 01:59:30 +0000 (0:00:00.116) 0:00:05.272 *********
2026-04-07 01:59:32.103981 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:59:32.103992 | orchestrator |
2026-04-07 01:59:32.104003 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-07 01:59:32.104014 | orchestrator | Tuesday 07 April 2026 01:59:30 +0000 (0:00:00.120) 0:00:05.393 *********
2026-04-07 01:59:32.104026 | orchestrator | changed: [testbed-node-5]
2026-04-07 01:59:32.104037 | orchestrator |
2026-04-07 01:59:32.104048 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-07 01:59:32.104059 | orchestrator | Tuesday 07 April 2026 01:59:31 +0000 (0:00:00.696) 0:00:06.089 *********
2026-04-07 01:59:32.104093 | orchestrator | skipping: [testbed-node-5]
2026-04-07 01:59:32.104106 | orchestrator |
2026-04-07 01:59:32.104117 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 01:59:32.104129 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104142 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104153 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104164 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104176 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104187 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 01:59:32.104198 | orchestrator |
2026-04-07 01:59:32.104210 | orchestrator |
2026-04-07 01:59:32.104221 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 01:59:32.104232 | orchestrator | Tuesday 07 April 2026 01:59:31 +0000 (0:00:00.045) 0:00:06.135 *********
2026-04-07 01:59:32.104252 | orchestrator | ===============================================================================
2026-04-07 01:59:32.104263 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.41s
2026-04-07 01:59:32.104275 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.83s
2026-04-07 01:59:32.104286 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2026-04-07 01:59:32.495938 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-07 01:59:44.833850 | orchestrator | 2026-04-07 01:59:44 | INFO  | Task 68c0728c-f31f-4b6f-b1a0-dd754cc92110 (wait-for-connection) was prepared for execution.
2026-04-07 01:59:44.834000 | orchestrator | 2026-04-07 01:59:44 | INFO  | It takes a moment until task 68c0728c-f31f-4b6f-b1a0-dd754cc92110 (wait-for-connection) has been started and output is visible here.
2026-04-07 02:00:01.493448 | orchestrator | 2026-04-07 02:00:01.493548 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-07 02:00:01.493579 | orchestrator | 2026-04-07 02:00:01.493586 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-07 02:00:01.493593 | orchestrator | Tuesday 07 April 2026 01:59:49 +0000 (0:00:00.249) 0:00:00.249 ********* 2026-04-07 02:00:01.493599 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:00:01.493606 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:00:01.493612 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:00:01.493618 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:00:01.493675 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:00:01.493684 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:00:01.493690 | orchestrator | 2026-04-07 02:00:01.493697 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:00:01.493704 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493712 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493718 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493725 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493731 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493737 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:01.493743 | orchestrator | 2026-04-07 02:00:01.493750 | orchestrator | 2026-04-07 02:00:01.493756 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-07 02:00:01.493762 | orchestrator | Tuesday 07 April 2026 02:00:01 +0000 (0:00:11.560) 0:00:11.809 ********* 2026-04-07 02:00:01.493768 | orchestrator | =============================================================================== 2026-04-07 02:00:01.493774 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2026-04-07 02:00:01.872489 | orchestrator | + osism apply hddtemp 2026-04-07 02:00:14.294708 | orchestrator | 2026-04-07 02:00:14 | INFO  | Task b34a98db-9ec4-4b62-9a5a-9072a6a45826 (hddtemp) was prepared for execution. 2026-04-07 02:00:14.294791 | orchestrator | 2026-04-07 02:00:14 | INFO  | It takes a moment until task b34a98db-9ec4-4b62-9a5a-9072a6a45826 (hddtemp) has been started and output is visible here. 2026-04-07 02:00:44.026588 | orchestrator | 2026-04-07 02:00:44.026717 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-07 02:00:44.026730 | orchestrator | 2026-04-07 02:00:44.026738 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-07 02:00:44.026746 | orchestrator | Tuesday 07 April 2026 02:00:19 +0000 (0:00:00.354) 0:00:00.354 ********* 2026-04-07 02:00:44.026754 | orchestrator | ok: [testbed-manager] 2026-04-07 02:00:44.026762 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:00:44.026769 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:00:44.026776 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:00:44.026783 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:00:44.026790 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:00:44.026797 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:00:44.026804 | orchestrator | 2026-04-07 02:00:44.026811 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-04-07 02:00:44.026818 | orchestrator | Tuesday 07 April 2026 
02:00:20 +0000 (0:00:00.847) 0:00:01.201 ********* 2026-04-07 02:00:44.026827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:00:44.026854 | orchestrator | 2026-04-07 02:00:44.026862 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-07 02:00:44.026869 | orchestrator | Tuesday 07 April 2026 02:00:21 +0000 (0:00:01.343) 0:00:02.545 ********* 2026-04-07 02:00:44.026876 | orchestrator | ok: [testbed-manager] 2026-04-07 02:00:44.026883 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:00:44.026890 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:00:44.026896 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:00:44.026903 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:00:44.026911 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:00:44.026918 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:00:44.026925 | orchestrator | 2026-04-07 02:00:44.026932 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-07 02:00:44.026951 | orchestrator | Tuesday 07 April 2026 02:00:23 +0000 (0:00:01.983) 0:00:04.528 ********* 2026-04-07 02:00:44.026959 | orchestrator | changed: [testbed-manager] 2026-04-07 02:00:44.026966 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:00:44.026973 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:00:44.026980 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:00:44.026987 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:00:44.026994 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:00:44.027001 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:00:44.027008 | orchestrator | 2026-04-07 02:00:44.027015 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-04-07 02:00:44.027022 | orchestrator | Tuesday 07 April 2026 02:00:25 +0000 (0:00:01.344) 0:00:05.873 ********* 2026-04-07 02:00:44.027029 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:00:44.027036 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:00:44.027043 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:00:44.027050 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:00:44.027057 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:00:44.027064 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:00:44.027070 | orchestrator | ok: [testbed-manager] 2026-04-07 02:00:44.027077 | orchestrator | 2026-04-07 02:00:44.027094 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-07 02:00:44.027101 | orchestrator | Tuesday 07 April 2026 02:00:26 +0000 (0:00:01.321) 0:00:07.195 ********* 2026-04-07 02:00:44.027108 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:00:44.027115 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:00:44.027122 | orchestrator | changed: [testbed-manager] 2026-04-07 02:00:44.027129 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:00:44.027136 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:00:44.027143 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:00:44.027151 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:00:44.027159 | orchestrator | 2026-04-07 02:00:44.027168 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-07 02:00:44.027176 | orchestrator | Tuesday 07 April 2026 02:00:27 +0000 (0:00:00.914) 0:00:08.109 ********* 2026-04-07 02:00:44.027184 | orchestrator | changed: [testbed-manager] 2026-04-07 02:00:44.027192 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:00:44.027200 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:00:44.027208 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:00:44.027217 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 02:00:44.027225 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:00:44.027234 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:00:44.027242 | orchestrator | 2026-04-07 02:00:44.027250 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-07 02:00:44.027259 | orchestrator | Tuesday 07 April 2026 02:00:39 +0000 (0:00:12.683) 0:00:20.793 ********* 2026-04-07 02:00:44.027267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:00:44.027276 | orchestrator | 2026-04-07 02:00:44.027289 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-07 02:00:44.027296 | orchestrator | Tuesday 07 April 2026 02:00:41 +0000 (0:00:01.606) 0:00:22.399 ********* 2026-04-07 02:00:44.027303 | orchestrator | changed: [testbed-manager] 2026-04-07 02:00:44.027310 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:00:44.027317 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:00:44.027324 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:00:44.027331 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:00:44.027338 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:00:44.027345 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:00:44.027352 | orchestrator | 2026-04-07 02:00:44.027359 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:00:44.027366 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:00:44.027388 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027397 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027404 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027411 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027418 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027424 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:00:44.027431 | orchestrator | 2026-04-07 02:00:44.027438 | orchestrator | 2026-04-07 02:00:44.027445 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:00:44.027452 | orchestrator | Tuesday 07 April 2026 02:00:43 +0000 (0:00:01.986) 0:00:24.386 ********* 2026-04-07 02:00:44.027459 | orchestrator | =============================================================================== 2026-04-07 02:00:44.027466 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.68s 2026-04-07 02:00:44.027473 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.99s 2026-04-07 02:00:44.027480 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s 2026-04-07 02:00:44.027491 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.61s 2026-04-07 02:00:44.027498 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.34s 2026-04-07 02:00:44.027505 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.34s 2026-04-07 02:00:44.027512 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.32s 2026-04-07 02:00:44.027519 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.91s 2026-04-07 02:00:44.027535 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.85s 2026-04-07 02:00:44.480029 | orchestrator | ++ semver 9.5.0 7.1.1 2026-04-07 02:00:44.524570 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 02:00:44.524654 | orchestrator | + sudo systemctl restart manager.service 2026-04-07 02:00:58.654102 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-07 02:00:58.654200 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-07 02:00:58.654213 | orchestrator | + local max_attempts=60 2026-04-07 02:00:58.654223 | orchestrator | + local name=ceph-ansible 2026-04-07 02:00:58.654231 | orchestrator | + local attempt_num=1 2026-04-07 02:00:58.654241 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:00:58.691101 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:00:58.691187 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:00:58.691193 | orchestrator | + sleep 5 2026-04-07 02:01:03.696357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:03.730313 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:03.730400 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:03.730413 | orchestrator | + sleep 5 2026-04-07 02:01:08.734008 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:08.760362 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:08.760448 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:08.760460 | orchestrator | + sleep 5 2026-04-07 02:01:13.764149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:13.796183 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:13.796270 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-04-07 02:01:13.796280 | orchestrator | + sleep 5 2026-04-07 02:01:18.801348 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:18.849996 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:18.850145 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:18.850161 | orchestrator | + sleep 5 2026-04-07 02:01:23.854522 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:23.887947 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:23.888047 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:23.888063 | orchestrator | + sleep 5 2026-04-07 02:01:28.893681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:28.935084 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:28.935179 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:28.935202 | orchestrator | + sleep 5 2026-04-07 02:01:33.941807 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:33.991408 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:33.991477 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:33.991483 | orchestrator | + sleep 5 2026-04-07 02:01:38.997255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:39.029196 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:39.029291 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:39.029303 | orchestrator | + sleep 5 2026-04-07 02:01:44.033290 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:44.078254 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:44.078343 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-04-07 02:01:44.078355 | orchestrator | + sleep 5 2026-04-07 02:01:49.083588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:49.122921 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:49.123004 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:49.123014 | orchestrator | + sleep 5 2026-04-07 02:01:54.128609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:54.176956 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:54.177048 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:54.177060 | orchestrator | + sleep 5 2026-04-07 02:01:59.182536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:01:59.227006 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-07 02:01:59.227142 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-07 02:01:59.227170 | orchestrator | + sleep 5 2026-04-07 02:02:04.231883 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-07 02:02:04.273899 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:02:04.273994 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-07 02:02:04.274009 | orchestrator | + local max_attempts=60 2026-04-07 02:02:04.274064 | orchestrator | + local name=kolla-ansible 2026-04-07 02:02:04.274070 | orchestrator | + local attempt_num=1 2026-04-07 02:02:04.274223 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-07 02:02:04.314240 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:02:04.314307 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-07 02:02:04.314314 | orchestrator | + local max_attempts=60 2026-04-07 02:02:04.314340 | orchestrator | + local name=osism-ansible 2026-04-07 02:02:04.314347 | 
orchestrator | + local attempt_num=1 2026-04-07 02:02:04.314555 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-07 02:02:04.345072 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 02:02:04.345135 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-07 02:02:04.345141 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-07 02:02:04.517026 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-07 02:02:04.682389 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-07 02:02:04.844137 | orchestrator | ARA in osism-ansible already disabled. 2026-04-07 02:02:05.009902 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-07 02:02:05.010117 | orchestrator | + osism apply gather-facts 2026-04-07 02:02:17.295567 | orchestrator | 2026-04-07 02:02:17 | INFO  | Task 0ff05abc-2ba9-4115-8f82-676fa9eb991c (gather-facts) was prepared for execution. 2026-04-07 02:02:17.295670 | orchestrator | 2026-04-07 02:02:17 | INFO  | It takes a moment until task 0ff05abc-2ba9-4115-8f82-676fa9eb991c (gather-facts) has been started and output is visible here. 
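The xtrace above shows `wait_for_container_healthy` polling each ansible container's Docker health status every five seconds until it reports `healthy`. The following is a sketch reconstructed from that trace; the function name, arguments, retry budget, and 5-second interval come from the log, while the failure branch (what happens once `max_attempts` is exhausted) is an assumption, since the log never reaches it.

```shell
# Reconstruction of the polling helper seen in the xtrace above.
# Assumed behavior on exhaustion: print a message and return non-zero.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage, as in the trace:
#   wait_for_container_healthy 60 ceph-ansible
```

Note that the container may pass through `unhealthy` and `starting` states (both visible in the trace) before settling on `healthy`; the loop treats anything other than `healthy` as "keep waiting".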
2026-04-07 02:02:31.412102 | orchestrator | 2026-04-07 02:02:31.412247 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 02:02:31.412276 | orchestrator | 2026-04-07 02:02:31.412294 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-07 02:02:31.412314 | orchestrator | Tuesday 07 April 2026 02:02:21 +0000 (0:00:00.250) 0:00:00.250 ********* 2026-04-07 02:02:31.412331 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:02:31.412349 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:02:31.412367 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:02:31.412387 | orchestrator | ok: [testbed-manager] 2026-04-07 02:02:31.412406 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:02:31.412424 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:02:31.412442 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:02:31.412461 | orchestrator | 2026-04-07 02:02:31.412481 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 02:02:31.412501 | orchestrator | 2026-04-07 02:02:31.412522 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 02:02:31.412540 | orchestrator | Tuesday 07 April 2026 02:02:30 +0000 (0:00:08.490) 0:00:08.740 ********* 2026-04-07 02:02:31.412558 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:02:31.412577 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:02:31.412595 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:02:31.412613 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:02:31.412632 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:02:31.412651 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:02:31.412669 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:02:31.412727 | orchestrator | 2026-04-07 02:02:31.412749 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-07 02:02:31.412769 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412787 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412835 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412853 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412886 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412907 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412924 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:02:31.412978 | orchestrator | 2026-04-07 02:02:31.412997 | orchestrator | 2026-04-07 02:02:31.413016 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:02:31.413033 | orchestrator | Tuesday 07 April 2026 02:02:30 +0000 (0:00:00.620) 0:00:09.361 ********* 2026-04-07 02:02:31.413051 | orchestrator | =============================================================================== 2026-04-07 02:02:31.413069 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.49s 2026-04-07 02:02:31.413088 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-04-07 02:02:31.748171 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-07 02:02:31.760269 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-07 
02:02:31.773448 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-07 02:02:31.796632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-07 02:02:31.815346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-07 02:02:31.835120 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-07 02:02:31.852841 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-07 02:02:31.874520 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-07 02:02:31.893690 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-07 02:02:31.911640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-07 02:02:31.924843 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-07 02:02:31.941027 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-07 02:02:31.963949 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-07 02:02:31.984712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-07 02:02:32.007992 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-07 02:02:32.029290 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-07 02:02:32.047392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-07 02:02:32.064678 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-07 02:02:32.080961 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-07 02:02:32.100551 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-07 02:02:32.111419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-07 02:02:32.131224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-07 02:02:32.153583 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-07 02:02:32.172401 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-07 02:02:32.568163 | orchestrator | ok: Runtime: 0:25:25.470505 2026-04-07 02:02:32.673438 | 2026-04-07 02:02:32.673573 | TASK [Deploy services] 2026-04-07 02:02:33.384563 | orchestrator | 2026-04-07 02:02:33.384741 | orchestrator | # DEPLOY SERVICES 2026-04-07 02:02:33.384765 | orchestrator | 2026-04-07 02:02:33.384776 | orchestrator | + set -e 2026-04-07 02:02:33.384785 | orchestrator | + echo 2026-04-07 02:02:33.384811 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-07 02:02:33.384822 | orchestrator | + echo 2026-04-07 02:02:33.384851 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 02:02:33.384866 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 02:02:33.384877 | orchestrator | ++ INTERACTIVE=false 2026-04-07 
02:02:33.384884 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 02:02:33.384894 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 02:02:33.384899 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 02:02:33.384906 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 02:02:33.384911 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 02:02:33.384918 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 02:02:33.384923 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 02:02:33.384930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 02:02:33.384935 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 02:02:33.384943 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 02:02:33.384947 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 02:02:33.384952 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 02:02:33.384957 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 02:02:33.384962 | orchestrator | ++ export ARA=false 2026-04-07 02:02:33.384966 | orchestrator | ++ ARA=false 2026-04-07 02:02:33.384971 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 02:02:33.384975 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 02:02:33.384980 | orchestrator | ++ export TEMPEST=false 2026-04-07 02:02:33.384984 | orchestrator | ++ TEMPEST=false 2026-04-07 02:02:33.384988 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 02:02:33.384993 | orchestrator | ++ IS_ZUUL=true 2026-04-07 02:02:33.384997 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 02:02:33.385002 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 02:02:33.385017 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 02:02:33.385022 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 02:02:33.385026 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 02:02:33.385031 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 02:02:33.385035 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 
02:02:33.385039 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 02:02:33.385044 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 02:02:33.385053 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 02:02:33.385058 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-07 02:02:33.397647 | orchestrator | 2026-04-07 02:02:33.397731 | orchestrator | # PULL IMAGES 2026-04-07 02:02:33.397772 | orchestrator | 2026-04-07 02:02:33.397782 | orchestrator | + set -e 2026-04-07 02:02:33.397789 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 02:02:33.397822 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 02:02:33.397831 | orchestrator | ++ INTERACTIVE=false 2026-04-07 02:02:33.397839 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 02:02:33.397848 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 02:02:33.397853 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 02:02:33.397858 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 02:02:33.397863 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 02:02:33.397868 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 02:02:33.397872 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 02:02:33.397877 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 02:02:33.397882 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 02:02:33.397886 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 02:02:33.397891 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 02:02:33.397896 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 02:02:33.397901 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 02:02:33.397905 | orchestrator | ++ export ARA=false 2026-04-07 02:02:33.397910 | orchestrator | ++ ARA=false 2026-04-07 02:02:33.397917 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 02:02:33.397922 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 02:02:33.397927 | orchestrator | ++ export TEMPEST=false 
2026-04-07 02:02:33.397931 | orchestrator | ++ TEMPEST=false
2026-04-07 02:02:33.397936 | orchestrator | ++ export IS_ZUUL=true
2026-04-07 02:02:33.397940 | orchestrator | ++ IS_ZUUL=true
2026-04-07 02:02:33.397945 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 02:02:33.397950 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 02:02:33.397955 | orchestrator | ++ export EXTERNAL_API=false
2026-04-07 02:02:33.397959 | orchestrator | ++ EXTERNAL_API=false
2026-04-07 02:02:33.397964 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-07 02:02:33.397968 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-07 02:02:33.397993 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-07 02:02:33.397998 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-07 02:02:33.398003 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-07 02:02:33.398007 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-07 02:02:33.398012 | orchestrator | + echo
2026-04-07 02:02:33.398039 | orchestrator | + echo '# PULL IMAGES'
2026-04-07 02:02:33.398046 | orchestrator | + echo
2026-04-07 02:02:33.398065 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-07 02:02:33.461070 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-07 02:02:33.461143 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-07 02:02:35.527086 | orchestrator | 2026-04-07 02:02:35 | INFO  | Trying to run play pull-images in environment custom
2026-04-07 02:02:45.694133 | orchestrator | 2026-04-07 02:02:45 | INFO  | Task 0b1642ce-bb22-4322-8324-33caa5ab6b90 (pull-images) was prepared for execution.
2026-04-07 02:02:45.694273 | orchestrator | 2026-04-07 02:02:45 | INFO  | Task 0b1642ce-bb22-4322-8324-33caa5ab6b90 is running in background. No more output. Check ARA for logs.
2026-04-07 02:02:46.088091 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-07 02:02:58.507575 | orchestrator | 2026-04-07 02:02:58 | INFO  | Task e8ff1a8a-3494-43f4-8572-b9f713fd9e2d (cgit) was prepared for execution.
2026-04-07 02:02:58.507719 | orchestrator | 2026-04-07 02:02:58 | INFO  | Task e8ff1a8a-3494-43f4-8572-b9f713fd9e2d is running in background. No more output. Check ARA for logs.
2026-04-07 02:03:11.427128 | orchestrator | 2026-04-07 02:03:11 | INFO  | Task dfcefff5-7dbb-4ae6-a5c2-760e29e1835f (dotfiles) was prepared for execution.
2026-04-07 02:03:11.427230 | orchestrator | 2026-04-07 02:03:11 | INFO  | Task dfcefff5-7dbb-4ae6-a5c2-760e29e1835f is running in background. No more output. Check ARA for logs.
2026-04-07 02:03:24.511190 | orchestrator | 2026-04-07 02:03:24 | INFO  | Task d8200f6f-13bb-43e4-8e5f-aae975ff308d (homer) was prepared for execution.
2026-04-07 02:03:24.511295 | orchestrator | 2026-04-07 02:03:24 | INFO  | Task d8200f6f-13bb-43e4-8e5f-aae975ff308d is running in background. No more output. Check ARA for logs.
2026-04-07 02:03:37.449346 | orchestrator | 2026-04-07 02:03:37 | INFO  | Task 9dc3e68c-4d5f-42c0-9785-ac0aa7e6bd35 (phpmyadmin) was prepared for execution.
2026-04-07 02:03:37.449469 | orchestrator | 2026-04-07 02:03:37 | INFO  | Task 9dc3e68c-4d5f-42c0-9785-ac0aa7e6bd35 is running in background. No more output. Check ARA for logs.
2026-04-07 02:03:50.172662 | orchestrator | 2026-04-07 02:03:50 | INFO  | Task b79ade0a-4578-4952-a111-9038625fa74e (sosreport) was prepared for execution.
2026-04-07 02:03:50.172797 | orchestrator | 2026-04-07 02:03:50 | INFO  | Task b79ade0a-4578-4952-a111-9038625fa74e is running in background. No more output. Check ARA for logs.
2026-04-07 02:03:50.572550 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-07 02:03:50.581707 | orchestrator | + set -e
2026-04-07 02:03:50.581775 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 02:03:50.581783 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 02:03:50.581789 | orchestrator | ++ INTERACTIVE=false
2026-04-07 02:03:50.581796 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 02:03:50.581801 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 02:03:50.581806 | orchestrator | + source /opt/manager-vars.sh
2026-04-07 02:03:50.581811 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-07 02:03:50.581816 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-07 02:03:50.581820 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-07 02:03:50.581825 | orchestrator | ++ CEPH_VERSION=reef
2026-04-07 02:03:50.581830 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-07 02:03:50.581835 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-07 02:03:50.581839 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-07 02:03:50.581844 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-07 02:03:50.581849 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-07 02:03:50.581854 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-07 02:03:50.581858 | orchestrator | ++ export ARA=false
2026-04-07 02:03:50.581892 | orchestrator | ++ ARA=false
2026-04-07 02:03:50.581897 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-07 02:03:50.581923 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-07 02:03:50.581928 | orchestrator | ++ export TEMPEST=false
2026-04-07 02:03:50.581933 | orchestrator | ++ TEMPEST=false
2026-04-07 02:03:50.581938 | orchestrator | ++ export IS_ZUUL=true
2026-04-07 02:03:50.581942 | orchestrator | ++ IS_ZUUL=true
2026-04-07 02:03:50.581957 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 02:03:50.581966 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 02:03:50.581971 | orchestrator | ++ export EXTERNAL_API=false
2026-04-07 02:03:50.581975 | orchestrator | ++ EXTERNAL_API=false
2026-04-07 02:03:50.581980 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-07 02:03:50.581984 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-07 02:03:50.581989 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-07 02:03:50.581993 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-07 02:03:50.582142 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-07 02:03:50.582219 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-07 02:03:50.582795 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-07 02:03:50.665384 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-07 02:03:50.665465 | orchestrator | + osism apply frr
2026-04-07 02:04:02.935158 | orchestrator | 2026-04-07 02:04:02 | INFO  | Task 9e7be908-ccad-4365-9184-fe9bbc54b01e (frr) was prepared for execution.
2026-04-07 02:04:02.935246 | orchestrator | 2026-04-07 02:04:02 | INFO  | It takes a moment until task 9e7be908-ccad-4365-9184-fe9bbc54b01e (frr) has been started and output is visible here.
2026-04-07 02:04:44.683441 | orchestrator |
2026-04-07 02:04:44.683525 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-07 02:04:44.683535 | orchestrator |
2026-04-07 02:04:44.683541 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-07 02:04:44.683551 | orchestrator | Tuesday 07 April 2026 02:04:09 +0000 (0:00:00.491) 0:00:00.491 *********
2026-04-07 02:04:44.683557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 02:04:44.683564 | orchestrator |
2026-04-07 02:04:44.683570 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-07 02:04:44.683576 | orchestrator | Tuesday 07 April 2026 02:04:10 +0000 (0:00:00.388) 0:00:00.880 *********
2026-04-07 02:04:44.683581 | orchestrator | changed: [testbed-manager]
2026-04-07 02:04:44.683588 | orchestrator |
2026-04-07 02:04:44.683593 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-07 02:04:44.683601 | orchestrator | Tuesday 07 April 2026 02:04:13 +0000 (0:00:02.801) 0:00:03.681 *********
2026-04-07 02:04:44.683606 | orchestrator | changed: [testbed-manager]
2026-04-07 02:04:44.683612 | orchestrator |
2026-04-07 02:04:44.683617 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-07 02:04:44.683623 | orchestrator | Tuesday 07 April 2026 02:04:29 +0000 (0:00:16.154) 0:00:19.836 *********
2026-04-07 02:04:44.683628 | orchestrator | ok: [testbed-manager]
2026-04-07 02:04:44.683635 | orchestrator |
2026-04-07 02:04:44.683641 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-07 02:04:44.683651 | orchestrator | Tuesday 07 April 2026 02:04:30 +0000 (0:00:01.367) 0:00:21.203 *********
2026-04-07 02:04:44.683660 | orchestrator | changed: [testbed-manager]
2026-04-07 02:04:44.683669 | orchestrator |
2026-04-07 02:04:44.683677 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-07 02:04:44.683686 | orchestrator | Tuesday 07 April 2026 02:04:32 +0000 (0:00:02.042) 0:00:22.743 *********
2026-04-07 02:04:44.683694 | orchestrator | ok: [testbed-manager]
2026-04-07 02:04:44.683701 | orchestrator |
2026-04-07 02:04:44.683709 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-07 02:04:44.683718 | orchestrator | Tuesday 07 April 2026 02:04:34 +0000 (0:00:00.440) 0:00:24.786 *********
2026-04-07 02:04:44.683726 | orchestrator | skipping: [testbed-manager]
2026-04-07 02:04:44.683735 | orchestrator |
2026-04-07 02:04:44.683743 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-07 02:04:44.683752 | orchestrator | Tuesday 07 April 2026 02:04:34 +0000 (0:00:00.248) 0:00:25.227 *********
2026-04-07 02:04:44.683783 | orchestrator | skipping: [testbed-manager]
2026-04-07 02:04:44.683793 | orchestrator |
2026-04-07 02:04:44.683802 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-07 02:04:44.683810 | orchestrator | Tuesday 07 April 2026 02:04:34 +0000 (0:00:01.100) 0:00:25.475 *********
2026-04-07 02:04:44.683818 | orchestrator | changed: [testbed-manager]
2026-04-07 02:04:44.683827 | orchestrator |
2026-04-07 02:04:44.683835 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-07 02:04:44.683844 | orchestrator | Tuesday 07 April 2026 02:04:35 +0000 (0:00:04.791) 0:00:26.575 *********
2026-04-07 02:04:44.683853 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-07 02:04:44.683862 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-07 02:04:44.683874 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-07 02:04:44.683883 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-07 02:04:44.683892 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-07 02:04:44.683901 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-07 02:04:44.683999 | orchestrator |
2026-04-07 02:04:44.684013 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-07 02:04:44.684023 | orchestrator | Tuesday 07 April 2026 02:04:40 +0000 (0:00:01.947) 0:00:31.367 *********
2026-04-07 02:04:44.684033 | orchestrator | ok: [testbed-manager]
2026-04-07 02:04:44.684043 | orchestrator |
2026-04-07 02:04:44.684054 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-07 02:04:44.684064 | orchestrator | Tuesday 07 April 2026 02:04:42 +0000 (0:00:01.582) 0:00:33.315 *********
2026-04-07 02:04:44.684074 | orchestrator | changed: [testbed-manager]
2026-04-07 02:04:44.684081 | orchestrator |
2026-04-07 02:04:44.684088 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:04:44.684096 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 02:04:44.684102 | orchestrator |
2026-04-07 02:04:44.684109 | orchestrator |
2026-04-07 02:04:44.684122 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:04:44.684129 | orchestrator | Tuesday 07 April 2026 02:04:44 +0000 (0:00:01.582) 0:00:34.899 *********
2026-04-07 02:04:44.684136 | orchestrator | ===============================================================================
2026-04-07 02:04:44.684143 | orchestrator | osism.services.frr : Install frr package ------------------------------- 16.15s
2026-04-07 02:04:44.684149 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.79s
2026-04-07 02:04:44.684156 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.80s
2026-04-07 02:04:44.684162 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.04s
2026-04-07 02:04:44.684168 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.95s
2026-04-07 02:04:44.684190 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.58s
2026-04-07 02:04:44.684196 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.54s
2026-04-07 02:04:44.684201 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.37s
2026-04-07 02:04:44.684206 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.10s
2026-04-07 02:04:44.684212 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.44s
2026-04-07 02:04:44.684217 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.39s
2026-04-07 02:04:44.684223 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.25s
2026-04-07 02:04:45.040632 | orchestrator | + osism apply kubernetes
2026-04-07 02:04:47.198993 | orchestrator | 2026-04-07 02:04:47 | INFO  | Task ce6a8653-8b4a-45da-8ff6-f0ea5e8dded8 (kubernetes) was prepared for execution.
2026-04-07 02:04:47.199067 | orchestrator | 2026-04-07 02:04:47 | INFO  | It takes a moment until task ce6a8653-8b4a-45da-8ff6-f0ea5e8dded8 (kubernetes) has been started and output is visible here.
2026-04-07 02:05:14.987849 | orchestrator |
2026-04-07 02:05:14.988023 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-07 02:05:14.988044 | orchestrator |
2026-04-07 02:05:14.988056 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-07 02:05:14.988069 | orchestrator | Tuesday 07 April 2026 02:04:53 +0000 (0:00:00.241) 0:00:00.241 *********
2026-04-07 02:05:14.988086 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:05:14.988107 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:05:14.988125 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:05:14.988146 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:05:14.988164 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:05:14.988183 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:05:14.988202 | orchestrator |
2026-04-07 02:05:14.988221 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-07 02:05:14.988242 | orchestrator | Tuesday 07 April 2026 02:04:54 +0000 (0:00:01.278) 0:00:01.519 *********
2026-04-07 02:05:14.988261 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.988279 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.988297 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.988318 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.988339 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.988361 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.988383 | orchestrator |
2026-04-07 02:05:14.988403 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-07 02:05:14.988426 | orchestrator | Tuesday 07 April 2026 02:04:55 +0000 (0:00:00.724) 0:00:02.244 *********
2026-04-07 02:05:14.988446 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.988468 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.988487 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.988506 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.988528 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.988552 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.988573 | orchestrator |
2026-04-07 02:05:14.988595 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-07 02:05:14.988617 | orchestrator | Tuesday 07 April 2026 02:04:56 +0000 (0:00:00.846) 0:00:03.091 *********
2026-04-07 02:05:14.988639 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:05:14.988662 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:05:14.988677 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:05:14.988696 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:05:14.988710 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:05:14.988721 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:05:14.988732 | orchestrator |
2026-04-07 02:05:14.988743 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-07 02:05:14.988755 | orchestrator | Tuesday 07 April 2026 02:04:58 +0000 (0:00:02.029) 0:00:05.120 *********
2026-04-07 02:05:14.988766 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:05:14.988777 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:05:14.988787 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:05:14.988798 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:05:14.988809 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:05:14.988819 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:05:14.988830 | orchestrator |
2026-04-07 02:05:14.988841 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-07 02:05:14.988852 | orchestrator | Tuesday 07 April 2026 02:04:59 +0000 (0:00:01.108) 0:00:06.228 *********
2026-04-07 02:05:14.988862 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:05:14.988897 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:05:14.988908 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:05:14.988955 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:05:14.988970 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:05:14.988981 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:05:14.988991 | orchestrator |
2026-04-07 02:05:14.989013 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-07 02:05:14.989024 | orchestrator | Tuesday 07 April 2026 02:05:01 +0000 (0:00:01.980) 0:00:08.209 *********
2026-04-07 02:05:14.989035 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989045 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989056 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.989067 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.989078 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.989088 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.989099 | orchestrator |
2026-04-07 02:05:14.989110 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-07 02:05:14.989121 | orchestrator | Tuesday 07 April 2026 02:05:01 +0000 (0:00:00.820) 0:00:09.029 *********
2026-04-07 02:05:14.989132 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989143 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989153 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.989164 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.989175 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.989186 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.989196 | orchestrator |
2026-04-07 02:05:14.989207 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-07 02:05:14.989218 | orchestrator | Tuesday 07 April 2026 02:05:02 +0000 (0:00:00.847) 0:00:09.877 *********
2026-04-07 02:05:14.989229 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989240 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989251 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989262 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989273 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989284 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989295 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989306 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989316 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.989327 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989361 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989373 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.989384 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989395 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989406 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.989417 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-07 02:05:14.989427 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-07 02:05:14.989438 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.989449 | orchestrator |
2026-04-07 02:05:14.989460 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-07 02:05:14.989470 | orchestrator | Tuesday 07 April 2026 02:05:03 +0000 (0:00:00.650) 0:00:10.528 *********
2026-04-07 02:05:14.989481 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989492 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989503 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.989522 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.989533 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.989544 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.989560 | orchestrator |
2026-04-07 02:05:14.989578 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-07 02:05:14.989597 | orchestrator | Tuesday 07 April 2026 02:05:04 +0000 (0:00:01.501) 0:00:12.029 *********
2026-04-07 02:05:14.989616 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:05:14.989635 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:05:14.989652 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:05:14.989669 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:05:14.989680 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:05:14.989690 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:05:14.989701 | orchestrator |
2026-04-07 02:05:14.989711 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-07 02:05:14.989722 | orchestrator | Tuesday 07 April 2026 02:05:05 +0000 (0:00:00.821) 0:00:12.851 *********
2026-04-07 02:05:14.989733 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:05:14.989744 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:05:14.989754 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:05:14.989765 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:05:14.989776 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:05:14.989786 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:05:14.989797 | orchestrator |
2026-04-07 02:05:14.989808 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-07 02:05:14.989819 | orchestrator | Tuesday 07 April 2026 02:05:10 +0000 (0:00:05.058) 0:00:17.910 *********
2026-04-07 02:05:14.989829 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989846 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989857 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.989867 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.989878 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.989889 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.989899 | orchestrator |
2026-04-07 02:05:14.989934 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-07 02:05:14.989948 | orchestrator | Tuesday 07 April 2026 02:05:11 +0000 (0:00:01.008) 0:00:18.918 *********
2026-04-07 02:05:14.989959 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.989981 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.989992 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.990003 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.990014 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.990126 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.990147 | orchestrator |
2026-04-07 02:05:14.990166 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-07 02:05:14.990196 | orchestrator | Tuesday 07 April 2026 02:05:13 +0000 (0:00:01.451) 0:00:20.370 *********
2026-04-07 02:05:14.990304 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.990322 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.990333 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.990344 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.990355 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.990365 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.990376 | orchestrator |
2026-04-07 02:05:14.990387 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-07 02:05:14.990398 | orchestrator | Tuesday 07 April 2026 02:05:13 +0000 (0:00:00.693) 0:00:21.064 *********
2026-04-07 02:05:14.990419 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-07 02:05:14.990437 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-07 02:05:14.990448 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:05:14.990459 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-07 02:05:14.990482 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-07 02:05:14.990492 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-07 02:05:14.990503 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-07 02:05:14.990514 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:05:14.990525 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-07 02:05:14.990536 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-07 02:05:14.990546 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:05:14.990557 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-07 02:05:14.990568 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-07 02:05:14.990579 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:05:14.990589 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:05:14.990600 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-07 02:05:14.990611 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-07 02:05:14.990622 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:05:14.990633 | orchestrator |
2026-04-07 02:05:14.990644 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-07 02:05:14.990670 | orchestrator | Tuesday 07 April 2026 02:05:14 +0000 (0:00:00.979) 0:00:22.044 *********
2026-04-07 02:06:32.058468 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:06:32.058553 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:06:32.058563 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:06:32.058571 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:06:32.058578 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:06:32.058585 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:06:32.058592 | orchestrator |
2026-04-07 02:06:32.058600 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-07 02:06:32.058608 | orchestrator | Tuesday 07 April 2026 02:05:15 +0000 (0:00:00.641) 0:00:22.686 *********
2026-04-07 02:06:32.058615 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:06:32.058621 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:06:32.058628 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:06:32.058634 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:06:32.058641 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:06:32.058648 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:06:32.058654 | orchestrator |
2026-04-07 02:06:32.058661 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-07 02:06:32.058667 | orchestrator |
2026-04-07 02:06:32.058674 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-07 02:06:32.058682 | orchestrator | Tuesday 07 April 2026 02:05:16 +0000 (0:00:01.344) 0:00:24.031 *********
2026-04-07 02:06:32.058688 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:06:32.058695 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:06:32.058702 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:06:32.058708 | orchestrator |
2026-04-07 02:06:32.058726 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-07 02:06:32.058740 | orchestrator | Tuesday 07 April 2026 02:05:19 +0000 (0:00:02.283) 0:00:26.314 *********
2026-04-07 02:06:32.058747 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:06:32.058753 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:06:32.058760 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:06:32.058766 | orchestrator |
2026-04-07 02:06:32.058773 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-07 02:06:32.058780 | orchestrator | Tuesday 07 April 2026 02:05:20 +0000 (0:00:01.199) 0:00:27.513 *********
2026-04-07 02:06:32.058786 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:06:32.058793 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:06:32.058799 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:06:32.058807 | orchestrator |
2026-04-07 02:06:32.058813 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-07 02:06:32.058836 | orchestrator | Tuesday 07 April 2026 02:05:21 +0000 (0:00:00.814) 0:00:28.403 *********
2026-04-07 02:06:32.058844 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:06:32.058850 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:06:32.058857 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:06:32.058863 | orchestrator |
2026-04-07 02:06:32.058870 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-07 02:06:32.058876 | orchestrator | Tuesday 07 April 2026 02:05:22 +0000 (0:00:00.470) 0:00:29.217 *********
2026-04-07 02:06:32.058883 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:06:32.058889 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:06:32.058896 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:06:32.058902 | orchestrator |
2026-04-07 02:06:32.058909 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-07 02:06:32.058929 | orchestrator | Tuesday 07 April 2026 02:05:22 +0000 (0:00:00.470) 0:00:29.687 *********
2026-04-07 02:06:32.058981 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:06:32.058992 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:06:32.059003 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:06:32.059014 | orchestrator |
2026-04-07 02:06:32.059024 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-07 02:06:32.059035 | orchestrator | Tuesday 07 April 2026 02:05:23 +0000 (0:00:01.100) 0:00:30.788 *********
2026-04-07 02:06:32.059044 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:06:32.059052 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:06:32.059060 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:06:32.059068 | orchestrator |
2026-04-07 02:06:32.059075 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-07 02:06:32.059084 | orchestrator | Tuesday 07 April 2026 02:05:25 +0000 (0:00:01.596) 0:00:32.384 *********
2026-04-07 02:06:32.059091 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:06:32.059099 | orchestrator |
2026-04-07 02:06:32.059107 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-07 02:06:32.059114
| orchestrator | Tuesday 07 April 2026 02:05:25 +0000 (0:00:00.658) 0:00:33.042 ********* 2026-04-07 02:06:32.059121 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:06:32.059129 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:06:32.059137 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:06:32.059144 | orchestrator | 2026-04-07 02:06:32.059151 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-07 02:06:32.059158 | orchestrator | Tuesday 07 April 2026 02:05:28 +0000 (0:00:02.254) 0:00:35.297 ********* 2026-04-07 02:06:32.059166 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059173 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059180 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:06:32.059187 | orchestrator | 2026-04-07 02:06:32.059195 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-07 02:06:32.059202 | orchestrator | Tuesday 07 April 2026 02:05:28 +0000 (0:00:00.555) 0:00:35.853 ********* 2026-04-07 02:06:32.059209 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059217 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059225 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:06:32.059232 | orchestrator | 2026-04-07 02:06:32.059240 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-07 02:06:32.059248 | orchestrator | Tuesday 07 April 2026 02:05:29 +0000 (0:00:00.811) 0:00:36.664 ********* 2026-04-07 02:06:32.059255 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059263 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059270 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:06:32.059278 | orchestrator | 2026-04-07 02:06:32.059285 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-07 02:06:32.059307 | orchestrator | Tuesday 
07 April 2026 02:05:30 +0000 (0:00:01.251) 0:00:37.916 ********* 2026-04-07 02:06:32.059315 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:06:32.059328 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059336 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059343 | orchestrator | 2026-04-07 02:06:32.059351 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-07 02:06:32.059358 | orchestrator | Tuesday 07 April 2026 02:05:31 +0000 (0:00:00.630) 0:00:38.547 ********* 2026-04-07 02:06:32.059365 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:06:32.059372 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059380 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059387 | orchestrator | 2026-04-07 02:06:32.059395 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-07 02:06:32.059402 | orchestrator | Tuesday 07 April 2026 02:05:31 +0000 (0:00:00.325) 0:00:38.872 ********* 2026-04-07 02:06:32.059408 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:06:32.059414 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:06:32.059421 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:06:32.059427 | orchestrator | 2026-04-07 02:06:32.059438 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-07 02:06:32.059445 | orchestrator | Tuesday 07 April 2026 02:05:33 +0000 (0:00:01.368) 0:00:40.241 ********* 2026-04-07 02:06:32.059451 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:06:32.059457 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:06:32.059463 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:06:32.059469 | orchestrator | 2026-04-07 02:06:32.059475 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-07 02:06:32.059482 | orchestrator | Tuesday 07 April 2026 02:05:36 
+0000 (0:00:03.157) 0:00:43.398 ********* 2026-04-07 02:06:32.059488 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:06:32.059494 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:06:32.059500 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:06:32.059510 | orchestrator | 2026-04-07 02:06:32.059516 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-07 02:06:32.059523 | orchestrator | Tuesday 07 April 2026 02:05:36 +0000 (0:00:00.425) 0:00:43.823 ********* 2026-04-07 02:06:32.059529 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 02:06:32.059538 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 02:06:32.059544 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-07 02:06:32.059551 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-07 02:06:32.059557 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-07 02:06:32.059563 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-07 02:06:32.059569 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-07 02:06:32.059575 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-07 02:06:32.059581 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-07 02:06:32.059587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 02:06:32.059593 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 02:06:32.059605 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-07 02:06:32.059611 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-07 02:06:32.059617 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-07 02:06:32.059623 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-07 02:06:32.059630 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:06:32.059636 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:06:32.059642 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:06:32.059648 | orchestrator | 2026-04-07 02:06:32.059658 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-07 02:06:32.059664 | orchestrator | Tuesday 07 April 2026 02:06:30 +0000 (0:00:53.943) 0:01:37.767 ********* 2026-04-07 02:06:32.059670 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:06:32.059677 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:06:32.059683 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:06:32.059689 | orchestrator | 2026-04-07 02:06:32.059695 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-07 02:06:32.059702 | orchestrator | Tuesday 07 April 2026 02:06:31 +0000 (0:00:00.338) 0:01:38.106 ********* 2026-04-07 02:06:32.059712 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221038 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221112 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221119 | orchestrator | 2026-04-07 02:07:14.221124 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-07 02:07:14.221130 | orchestrator | Tuesday 07 April 2026 02:06:32 +0000 (0:00:01.012) 0:01:39.118 ********* 2026-04-07 02:07:14.221134 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221139 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221143 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221147 | orchestrator | 2026-04-07 02:07:14.221151 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-07 02:07:14.221155 | orchestrator | Tuesday 07 April 2026 02:06:33 +0000 (0:00:01.240) 0:01:40.359 ********* 2026-04-07 02:07:14.221159 
| orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221163 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221167 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221170 | orchestrator | 2026-04-07 02:07:14.221174 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-07 02:07:14.221178 | orchestrator | Tuesday 07 April 2026 02:06:59 +0000 (0:00:25.727) 0:02:06.087 ********* 2026-04-07 02:07:14.221182 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221187 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:07:14.221191 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:07:14.221194 | orchestrator | 2026-04-07 02:07:14.221198 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-07 02:07:14.221202 | orchestrator | Tuesday 07 April 2026 02:06:59 +0000 (0:00:00.652) 0:02:06.739 ********* 2026-04-07 02:07:14.221206 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221210 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:07:14.221214 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:07:14.221217 | orchestrator | 2026-04-07 02:07:14.221221 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-07 02:07:14.221225 | orchestrator | Tuesday 07 April 2026 02:07:00 +0000 (0:00:00.693) 0:02:07.433 ********* 2026-04-07 02:07:14.221229 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221232 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221236 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221240 | orchestrator | 2026-04-07 02:07:14.221244 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-07 02:07:14.221267 | orchestrator | Tuesday 07 April 2026 02:07:01 +0000 (0:00:00.749) 0:02:08.182 ********* 2026-04-07 02:07:14.221274 | orchestrator | ok: [testbed-node-1] 
2026-04-07 02:07:14.221281 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221286 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:07:14.221293 | orchestrator | 2026-04-07 02:07:14.221299 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-07 02:07:14.221306 | orchestrator | Tuesday 07 April 2026 02:07:01 +0000 (0:00:00.886) 0:02:09.069 ********* 2026-04-07 02:07:14.221312 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221318 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:07:14.221324 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:07:14.221331 | orchestrator | 2026-04-07 02:07:14.221338 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-07 02:07:14.221344 | orchestrator | Tuesday 07 April 2026 02:07:02 +0000 (0:00:00.319) 0:02:09.388 ********* 2026-04-07 02:07:14.221351 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221358 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221364 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221371 | orchestrator | 2026-04-07 02:07:14.221377 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-07 02:07:14.221380 | orchestrator | Tuesday 07 April 2026 02:07:02 +0000 (0:00:00.664) 0:02:10.053 ********* 2026-04-07 02:07:14.221384 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221388 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221392 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221395 | orchestrator | 2026-04-07 02:07:14.221399 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-07 02:07:14.221403 | orchestrator | Tuesday 07 April 2026 02:07:03 +0000 (0:00:00.654) 0:02:10.708 ********* 2026-04-07 02:07:14.221407 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221411 | 
orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221423 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221427 | orchestrator | 2026-04-07 02:07:14.221431 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-07 02:07:14.221435 | orchestrator | Tuesday 07 April 2026 02:07:04 +0000 (0:00:00.929) 0:02:11.637 ********* 2026-04-07 02:07:14.221440 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:07:14.221444 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:07:14.221448 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:07:14.221452 | orchestrator | 2026-04-07 02:07:14.221455 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-07 02:07:14.221459 | orchestrator | Tuesday 07 April 2026 02:07:05 +0000 (0:00:01.100) 0:02:12.737 ********* 2026-04-07 02:07:14.221469 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:07:14.221472 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:07:14.221476 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:07:14.221480 | orchestrator | 2026-04-07 02:07:14.221484 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-07 02:07:14.221487 | orchestrator | Tuesday 07 April 2026 02:07:05 +0000 (0:00:00.305) 0:02:13.043 ********* 2026-04-07 02:07:14.221491 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:07:14.221495 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:07:14.221499 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:07:14.221502 | orchestrator | 2026-04-07 02:07:14.221506 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-07 02:07:14.221510 | orchestrator | Tuesday 07 April 2026 02:07:06 +0000 (0:00:00.332) 0:02:13.376 ********* 2026-04-07 02:07:14.221513 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:07:14.221517 | orchestrator | 
ok: [testbed-node-2] 2026-04-07 02:07:14.221521 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221525 | orchestrator | 2026-04-07 02:07:14.221528 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-07 02:07:14.221532 | orchestrator | Tuesday 07 April 2026 02:07:06 +0000 (0:00:00.645) 0:02:14.021 ********* 2026-04-07 02:07:14.221541 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:07:14.221545 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:07:14.221566 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:07:14.221578 | orchestrator | 2026-04-07 02:07:14.221585 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-07 02:07:14.221592 | orchestrator | Tuesday 07 April 2026 02:07:07 +0000 (0:00:00.915) 0:02:14.936 ********* 2026-04-07 02:07:14.221599 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 02:07:14.221605 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 02:07:14.221612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-07 02:07:14.221618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 02:07:14.221625 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 02:07:14.221632 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-07 02:07:14.221638 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 02:07:14.221647 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 
02:07:14.221653 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-07 02:07:14.221660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-07 02:07:14.221667 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 02:07:14.221674 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 02:07:14.221681 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-07 02:07:14.221687 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 02:07:14.221693 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 02:07:14.221697 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-07 02:07:14.221701 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 02:07:14.221705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 02:07:14.221709 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-07 02:07:14.221712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-07 02:07:14.221716 | orchestrator | 2026-04-07 02:07:14.221720 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-07 02:07:14.221726 | orchestrator | 2026-04-07 02:07:14.221732 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-07 02:07:14.221739 | orchestrator | Tuesday 07 April 2026 02:07:11 +0000 (0:00:03.144) 
0:02:18.080 ********* 2026-04-07 02:07:14.221745 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:07:14.221751 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:07:14.221757 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:07:14.221763 | orchestrator | 2026-04-07 02:07:14.221781 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-07 02:07:14.221788 | orchestrator | Tuesday 07 April 2026 02:07:11 +0000 (0:00:00.345) 0:02:18.426 ********* 2026-04-07 02:07:14.221793 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:07:14.221799 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:07:14.221804 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:07:14.221816 | orchestrator | 2026-04-07 02:07:14.221822 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-07 02:07:14.221828 | orchestrator | Tuesday 07 April 2026 02:07:12 +0000 (0:00:00.864) 0:02:19.290 ********* 2026-04-07 02:07:14.221834 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:07:14.221840 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:07:14.221846 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:07:14.221852 | orchestrator | 2026-04-07 02:07:14.221859 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-07 02:07:14.221865 | orchestrator | Tuesday 07 April 2026 02:07:12 +0000 (0:00:00.358) 0:02:19.649 ********* 2026-04-07 02:07:14.221871 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:07:14.221878 | orchestrator | 2026-04-07 02:07:14.221884 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-07 02:07:14.221890 | orchestrator | Tuesday 07 April 2026 02:07:13 +0000 (0:00:00.551) 0:02:20.201 ********* 2026-04-07 02:07:14.221896 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:07:14.221903 
| orchestrator | skipping: [testbed-node-4] 2026-04-07 02:07:14.221909 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:07:14.221915 | orchestrator | 2026-04-07 02:07:14.221921 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-07 02:07:14.221927 | orchestrator | Tuesday 07 April 2026 02:07:13 +0000 (0:00:00.529) 0:02:20.730 ********* 2026-04-07 02:07:14.221934 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:07:14.221939 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:07:14.221945 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:07:14.221950 | orchestrator | 2026-04-07 02:07:14.221956 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-07 02:07:14.221985 | orchestrator | Tuesday 07 April 2026 02:07:13 +0000 (0:00:00.325) 0:02:21.056 ********* 2026-04-07 02:07:14.221999 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:08:56.898609 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:08:56.898722 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:08:56.898740 | orchestrator | 2026-04-07 02:08:56.898753 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-07 02:08:56.898766 | orchestrator | Tuesday 07 April 2026 02:07:14 +0000 (0:00:00.374) 0:02:21.431 ********* 2026-04-07 02:08:56.898777 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:08:56.898793 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:08:56.898806 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:08:56.898816 | orchestrator | 2026-04-07 02:08:56.898827 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-07 02:08:56.898838 | orchestrator | Tuesday 07 April 2026 02:07:14 +0000 (0:00:00.620) 0:02:22.051 ********* 2026-04-07 02:08:56.898848 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:08:56.898858 | 
orchestrator | changed: [testbed-node-4] 2026-04-07 02:08:56.898869 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:08:56.898879 | orchestrator | 2026-04-07 02:08:56.898889 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-07 02:08:56.898901 | orchestrator | Tuesday 07 April 2026 02:07:16 +0000 (0:00:01.381) 0:02:23.432 ********* 2026-04-07 02:08:56.898912 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:08:56.898924 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:08:56.898936 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:08:56.898947 | orchestrator | 2026-04-07 02:08:56.898958 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-07 02:08:56.898969 | orchestrator | Tuesday 07 April 2026 02:07:17 +0000 (0:00:01.332) 0:02:24.765 ********* 2026-04-07 02:08:56.898981 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:08:56.898992 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:08:56.899002 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:08:56.899066 | orchestrator | 2026-04-07 02:08:56.899081 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-07 02:08:56.899119 | orchestrator | 2026-04-07 02:08:56.899127 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-07 02:08:56.899134 | orchestrator | Tuesday 07 April 2026 02:07:27 +0000 (0:00:10.013) 0:02:34.779 ********* 2026-04-07 02:08:56.899141 | orchestrator | ok: [testbed-manager] 2026-04-07 02:08:56.899150 | orchestrator | 2026-04-07 02:08:56.899158 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-07 02:08:56.899166 | orchestrator | Tuesday 07 April 2026 02:07:28 +0000 (0:00:00.848) 0:02:35.627 ********* 2026-04-07 02:08:56.899174 | orchestrator | changed: [testbed-manager] 2026-04-07 
02:08:56.899185 | orchestrator | 2026-04-07 02:08:56.899202 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-07 02:08:56.899216 | orchestrator | Tuesday 07 April 2026 02:07:29 +0000 (0:00:00.676) 0:02:36.304 ********* 2026-04-07 02:08:56.899228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-07 02:08:56.899239 | orchestrator | 2026-04-07 02:08:56.899250 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-07 02:08:56.899260 | orchestrator | Tuesday 07 April 2026 02:07:29 +0000 (0:00:00.573) 0:02:36.877 ********* 2026-04-07 02:08:56.899269 | orchestrator | changed: [testbed-manager] 2026-04-07 02:08:56.899279 | orchestrator | 2026-04-07 02:08:56.899291 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-07 02:08:56.899302 | orchestrator | Tuesday 07 April 2026 02:07:30 +0000 (0:00:00.927) 0:02:37.805 ********* 2026-04-07 02:08:56.899314 | orchestrator | changed: [testbed-manager] 2026-04-07 02:08:56.899326 | orchestrator | 2026-04-07 02:08:56.899337 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-07 02:08:56.899348 | orchestrator | Tuesday 07 April 2026 02:07:31 +0000 (0:00:00.679) 0:02:38.484 ********* 2026-04-07 02:08:56.899361 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 02:08:56.899368 | orchestrator | 2026-04-07 02:08:56.899375 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-07 02:08:56.899382 | orchestrator | Tuesday 07 April 2026 02:07:33 +0000 (0:00:01.660) 0:02:40.145 ********* 2026-04-07 02:08:56.899388 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 02:08:56.899395 | orchestrator | 2026-04-07 02:08:56.899420 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-07 02:08:56.899427 | orchestrator | Tuesday 07 April 2026 02:07:33 +0000 (0:00:00.892) 0:02:41.037 *********
2026-04-07 02:08:56.899434 | orchestrator | changed: [testbed-manager]
2026-04-07 02:08:56.899440 | orchestrator |
2026-04-07 02:08:56.899447 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-07 02:08:56.899454 | orchestrator | Tuesday 07 April 2026 02:07:34 +0000 (0:00:00.458) 0:02:41.496 *********
2026-04-07 02:08:56.899460 | orchestrator | changed: [testbed-manager]
2026-04-07 02:08:56.899467 | orchestrator |
2026-04-07 02:08:56.899473 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-07 02:08:56.899480 | orchestrator |
2026-04-07 02:08:56.899486 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-07 02:08:56.899494 | orchestrator | Tuesday 07 April 2026 02:07:34 +0000 (0:00:00.165) 0:02:41.967 *********
2026-04-07 02:08:56.899501 | orchestrator | ok: [testbed-manager]
2026-04-07 02:08:56.899507 | orchestrator |
2026-04-07 02:08:56.899514 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-07 02:08:56.899520 | orchestrator | Tuesday 07 April 2026 02:07:35 +0000 (0:00:00.493) 0:02:42.132 *********
2026-04-07 02:08:56.899527 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 02:08:56.899534 | orchestrator |
2026-04-07 02:08:56.899541 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-07 02:08:56.899547 | orchestrator | Tuesday 07 April 2026 02:07:35 +0000 (0:00:00.866) 0:02:42.626 *********
2026-04-07 02:08:56.899554 | orchestrator | ok: [testbed-manager]
2026-04-07 02:08:56.899560 | orchestrator |
2026-04-07 02:08:56.899575 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-07 02:08:56.899582 | orchestrator | Tuesday 07 April 2026 02:07:36 +0000 (0:00:00.866) 0:02:43.492 *********
2026-04-07 02:08:56.899588 | orchestrator | ok: [testbed-manager]
2026-04-07 02:08:56.899595 | orchestrator |
2026-04-07 02:08:56.899619 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-07 02:08:56.899626 | orchestrator | Tuesday 07 April 2026 02:07:38 +0000 (0:00:01.770) 0:02:45.263 *********
2026-04-07 02:08:56.899633 | orchestrator | changed: [testbed-manager]
2026-04-07 02:08:56.899640 | orchestrator |
2026-04-07 02:08:56.899646 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-07 02:08:56.899653 | orchestrator | Tuesday 07 April 2026 02:07:39 +0000 (0:00:00.828) 0:02:46.092 *********
2026-04-07 02:08:56.899659 | orchestrator | ok: [testbed-manager]
2026-04-07 02:08:56.899666 | orchestrator |
2026-04-07 02:08:56.899673 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-07 02:08:56.899679 | orchestrator | Tuesday 07 April 2026 02:07:39 +0000 (0:00:00.500) 0:02:46.592 *********
2026-04-07 02:08:56.899687 | orchestrator | changed: [testbed-manager]
2026-04-07 02:08:56.899699 | orchestrator |
2026-04-07 02:08:56.899709 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-07 02:08:56.899720 | orchestrator | Tuesday 07 April 2026 02:07:47 +0000 (0:00:08.201) 0:02:54.794 *********
2026-04-07 02:08:56.899730 | orchestrator | changed: [testbed-manager]
2026-04-07 02:08:56.899741 | orchestrator |
2026-04-07 02:08:56.899752 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-07 02:08:56.899763 | orchestrator | Tuesday 07 April 2026 02:08:00 +0000 (0:00:13.048) 0:03:07.842 *********
2026-04-07 02:08:56.899775 | orchestrator | ok: [testbed-manager]
2026-04-07 02:08:56.899786 | orchestrator |
2026-04-07 02:08:56.899797 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-07 02:08:56.899806 | orchestrator |
2026-04-07 02:08:56.899812 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-07 02:08:56.899819 | orchestrator | Tuesday 07 April 2026 02:08:01 +0000 (0:00:00.854) 0:03:08.696 *********
2026-04-07 02:08:56.899826 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:08:56.899832 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:08:56.899839 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:08:56.899845 | orchestrator |
2026-04-07 02:08:56.899852 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-07 02:08:56.899858 | orchestrator | Tuesday 07 April 2026 02:08:01 +0000 (0:00:00.329) 0:03:09.026 *********
2026-04-07 02:08:56.899865 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.899872 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:08:56.899878 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:08:56.899885 | orchestrator |
2026-04-07 02:08:56.899891 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-07 02:08:56.899898 | orchestrator | Tuesday 07 April 2026 02:08:02 +0000 (0:00:00.338) 0:03:09.364 *********
2026-04-07 02:08:56.899905 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:08:56.899912 | orchestrator |
2026-04-07 02:08:56.899919 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-07 02:08:56.899925 | orchestrator | Tuesday 07 April 2026 02:08:03 +0000 (0:00:00.757) 0:03:10.121 *********
2026-04-07 02:08:56.899932 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 02:08:56.899938 | orchestrator |
2026-04-07 02:08:56.899945 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-07 02:08:56.899951 | orchestrator | Tuesday 07 April 2026 02:08:03 +0000 (0:00:00.850) 0:03:10.972 *********
2026-04-07 02:08:56.899958 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 02:08:56.899964 | orchestrator |
2026-04-07 02:08:56.899971 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-07 02:08:56.899984 | orchestrator | Tuesday 07 April 2026 02:08:04 +0000 (0:00:00.882) 0:03:11.854 *********
2026-04-07 02:08:56.899990 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.899997 | orchestrator |
2026-04-07 02:08:56.900003 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-07 02:08:56.900010 | orchestrator | Tuesday 07 April 2026 02:08:04 +0000 (0:00:00.123) 0:03:11.978 *********
2026-04-07 02:08:56.900074 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 02:08:56.900081 | orchestrator |
2026-04-07 02:08:56.900087 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-07 02:08:56.900094 | orchestrator | Tuesday 07 April 2026 02:08:05 +0000 (0:00:01.025) 0:03:13.004 *********
2026-04-07 02:08:56.900100 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.900107 | orchestrator |
2026-04-07 02:08:56.900114 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-07 02:08:56.900120 | orchestrator | Tuesday 07 April 2026 02:08:06 +0000 (0:00:00.156) 0:03:13.161 *********
2026-04-07 02:08:56.900127 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.900133 | orchestrator |
2026-04-07 02:08:56.900140 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-07 02:08:56.900146 | orchestrator | Tuesday 07 April 2026 02:08:06 +0000 (0:00:00.126) 0:03:13.287 *********
2026-04-07 02:08:56.900153 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.900159 | orchestrator |
2026-04-07 02:08:56.900166 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-07 02:08:56.900178 | orchestrator | Tuesday 07 April 2026 02:08:06 +0000 (0:00:00.112) 0:03:13.399 *********
2026-04-07 02:08:56.900185 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:08:56.900191 | orchestrator |
2026-04-07 02:08:56.900198 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-07 02:08:56.900205 | orchestrator | Tuesday 07 April 2026 02:08:06 +0000 (0:00:00.118) 0:03:13.518 *********
2026-04-07 02:08:56.900212 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 02:08:56.900218 | orchestrator |
2026-04-07 02:08:56.900225 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-07 02:08:56.900232 | orchestrator | Tuesday 07 April 2026 02:08:12 +0000 (0:00:05.651) 0:03:19.169 *********
2026-04-07 02:08:56.900238 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-07 02:08:56.900245 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
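The retrying "Wait for Cilium resources" task above can be sketched as an Ansible task like the following. The loop items and the 30-retry budget come directly from the log output; the `kubectl rollout status` invocation, namespace, and delay are assumptions, not the role's actual source:

```yaml
# Hypothetical sketch of a retrying readiness wait, modelled on the
# "k3s_server_post : Wait for Cilium resources" task seen in the log.
- name: Wait for Cilium resources
  ansible.builtin.command: >-
    kubectl rollout status {{ item }}
    --namespace kube-system --timeout 60s
  delegate_to: localhost
  run_once: true
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  register: _rollout
  retries: 30        # matches the "(30 retries left)" seen above
  delay: 10          # assumed polling interval
  until: _rollout.rc == 0
  changed_when: false
```

This wait accounted for 44.79s of the run; each `FAILED - RETRYING` line in the log corresponds to one exhausted `until` attempt before the resource became ready.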
2026-04-07 02:08:56.900260 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-07 02:09:21.889867 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-07 02:09:21.889983 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-07 02:09:21.890008 | orchestrator |
2026-04-07 02:09:21.890160 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-07 02:09:21.890173 | orchestrator | Tuesday 07 April 2026 02:08:56 +0000 (0:00:44.789) 0:04:03.959 *********
2026-04-07 02:09:21.890184 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 02:09:21.890195 | orchestrator |
2026-04-07 02:09:21.890212 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-07 02:09:21.890228 | orchestrator | Tuesday 07 April 2026 02:08:58 +0000 (0:00:01.359) 0:04:05.318 *********
2026-04-07 02:09:21.890244 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 02:09:21.890260 | orchestrator |
2026-04-07 02:09:21.890277 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-07 02:09:21.890293 | orchestrator | Tuesday 07 April 2026 02:08:59 +0000 (0:00:01.370) 0:04:07.005 *********
2026-04-07 02:09:21.890307 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 02:09:21.890321 | orchestrator |
2026-04-07 02:09:21.890338 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-07 02:09:21.890356 | orchestrator | Tuesday 07 April 2026 02:09:01 +0000 (0:00:01.370) 0:04:08.375 *********
2026-04-07 02:09:21.890400 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:09:21.890420 | orchestrator |
2026-04-07 02:09:21.890437 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-07 02:09:21.890455 | orchestrator | Tuesday 07 April 2026 02:09:01 +0000 (0:00:00.124) 0:04:08.500 *********
2026-04-07 02:09:21.890472 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-07 02:09:21.890490 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-07 02:09:21.890506 | orchestrator |
2026-04-07 02:09:21.890522 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-07 02:09:21.890538 | orchestrator | Tuesday 07 April 2026 02:09:03 +0000 (0:00:01.987) 0:04:10.488 *********
2026-04-07 02:09:21.890555 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:09:21.890572 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:09:21.890588 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:09:21.890604 | orchestrator |
2026-04-07 02:09:21.890614 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-07 02:09:21.890631 | orchestrator | Tuesday 07 April 2026 02:09:03 +0000 (0:00:00.325) 0:04:10.813 *********
2026-04-07 02:09:21.890647 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:09:21.890663 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:09:21.890679 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:09:21.890697 | orchestrator |
2026-04-07 02:09:21.890714 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-07 02:09:21.890730 | orchestrator |
2026-04-07 02:09:21.890744 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-07 02:09:21.890754 | orchestrator | Tuesday 07 April 2026 02:09:04 +0000 (0:00:00.896) 0:04:11.709 *********
2026-04-07 02:09:21.890763 | orchestrator | ok: [testbed-manager]
2026-04-07 02:09:21.890773 | orchestrator |
2026-04-07 02:09:21.890783 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-07 02:09:21.890793 | orchestrator | Tuesday 07 April 2026 02:09:05 +0000 (0:00:00.374) 0:04:12.084 *********
2026-04-07 02:09:21.890803 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 02:09:21.890812 | orchestrator |
2026-04-07 02:09:21.890822 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-07 02:09:21.890832 | orchestrator | Tuesday 07 April 2026 02:09:05 +0000 (0:00:00.241) 0:04:12.325 *********
2026-04-07 02:09:21.890841 | orchestrator | changed: [testbed-manager]
2026-04-07 02:09:21.890851 | orchestrator |
2026-04-07 02:09:21.890860 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-07 02:09:21.890870 | orchestrator |
2026-04-07 02:09:21.890880 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-07 02:09:21.890889 | orchestrator | Tuesday 07 April 2026 02:09:11 +0000 (0:00:05.916) 0:04:18.241 *********
2026-04-07 02:09:21.890899 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:09:21.890908 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:09:21.890918 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:09:21.890927 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:09:21.890937 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:09:21.890946 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:09:21.890956 | orchestrator |
2026-04-07 02:09:21.890965 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-07 02:09:21.890975 | orchestrator | Tuesday 07 April 2026 02:09:11 +0000 (0:00:00.637) 0:04:18.879 *********
2026-04-07 02:09:21.890985 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 02:09:21.890994 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 02:09:21.891004 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-07 02:09:21.891013 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 02:09:21.891091 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 02:09:21.891105 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-07 02:09:21.891114 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 02:09:21.891124 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 02:09:21.891134 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-07 02:09:21.891165 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 02:09:21.891175 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 02:09:21.891186 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 02:09:21.891195 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-07 02:09:21.891205 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 02:09:21.891215 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-07 02:09:21.891241 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 02:09:21.891251 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 02:09:21.891261 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-07 02:09:21.891270 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 02:09:21.891280 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 02:09:21.891289 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-07 02:09:21.891299 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 02:09:21.891308 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 02:09:21.891318 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-07 02:09:21.891327 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 02:09:21.891337 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 02:09:21.891346 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-07 02:09:21.891355 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 02:09:21.891365 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 02:09:21.891374 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-07 02:09:21.891385 | orchestrator |
2026-04-07 02:09:21.891402 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-07 02:09:21.891418 | orchestrator | Tuesday 07 April 2026 02:09:20 +0000 (0:00:08.767) 0:04:27.647 *********
2026-04-07 02:09:21.891435 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:09:21.891452 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:09:21.891467 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:09:21.891483 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:09:21.891500 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:09:21.891515 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:09:21.891532 | orchestrator |
2026-04-07 02:09:21.891547 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-07 02:09:21.891564 | orchestrator | Tuesday 07 April 2026 02:09:21 +0000 (0:00:00.583) 0:04:28.231 *********
2026-04-07 02:09:21.891581 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:09:21.891608 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:09:21.891625 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:09:21.891640 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:09:21.891656 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:09:21.891672 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:09:21.891689 | orchestrator |
2026-04-07 02:09:21.891705 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:09:21.891722 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:09:21.891742 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-07 02:09:21.891757 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-07 02:09:21.891774 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-07 02:09:21.891792 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 02:09:21.891807 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 02:09:21.891824 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-07 02:09:21.891834 | orchestrator |
2026-04-07 02:09:21.891844 | orchestrator |
2026-04-07 02:09:21.891854 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:09:21.891863 | orchestrator | Tuesday 07 April 2026 02:09:21 +0000 (0:00:00.707) 0:04:28.939 *********
2026-04-07 02:09:21.891883 | orchestrator | ===============================================================================
2026-04-07 02:09:22.336975 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.94s
2026-04-07 02:09:22.337112 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.79s
2026-04-07 02:09:22.337121 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.73s
2026-04-07 02:09:22.337126 | orchestrator | kubectl : Install required packages ------------------------------------ 13.05s
2026-04-07 02:09:22.337131 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.01s
2026-04-07 02:09:22.337135 | orchestrator | Manage labels ----------------------------------------------------------- 8.77s
2026-04-07 02:09:22.337140 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.20s
2026-04-07 02:09:22.337144 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.92s
2026-04-07 02:09:22.337148 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.65s
2026-04-07 02:09:22.337152 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.06s
2026-04-07 02:09:22.337157 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.16s
2026-04-07 02:09:22.337161 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.14s
2026-04-07 02:09:22.337168 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.28s
2026-04-07 02:09:22.337173 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.25s
2026-04-07 02:09:22.337178 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.03s
2026-04-07 02:09:22.337182 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.99s
2026-04-07 02:09:22.337186 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.98s
2026-04-07 02:09:22.337208 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.77s
2026-04-07 02:09:22.337212 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.69s
2026-04-07 02:09:22.337217 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.66s
2026-04-07 02:09:22.734356 | orchestrator | + osism apply copy-kubeconfig
2026-04-07 02:09:35.158277 | orchestrator | 2026-04-07 02:09:35 | INFO  | Task f33c0034-7df7-41d7-9ec9-2b44180cb4dc (copy-kubeconfig) was prepared for execution.
2026-04-07 02:09:35.158353 | orchestrator | 2026-04-07 02:09:35 | INFO  | It takes a moment until task f33c0034-7df7-41d7-9ec9-2b44180cb4dc (copy-kubeconfig) has been started and output is visible here.
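The "Manage labels" task above (8.77s in the recap) applies per-node label lists with kubectl, delegated to localhost. A minimal sketch of such a task, assuming a merged `node_labels` list variable; the label items in the comments are taken from the log, while the task body itself is hypothetical:

```yaml
# Hypothetical sketch of the node-labeling step seen in the log.
- name: Manage labels
  ansible.builtin.command: >-
    kubectl label node {{ inventory_hostname }} {{ item }} --overwrite
  delegate_to: localhost
  loop: "{{ node_labels }}"
  # Per the log, control-plane nodes (testbed-node-0..2) carry e.g.
  #   node-role.osism.tech/control-plane=true, openstack-control-plane=enabled,
  #   node-role.osism.tech/network-plane=true, and rook mds/mgr/mon/rgw labels,
  # while compute nodes (testbed-node-3..5) carry
  #   node-role.osism.tech/compute-plane=true, node-role.kubernetes.io/worker=worker,
  #   and node-role.osism.tech/rook-osd=true.
```

`--overwrite` makes the operation idempotent, which matches the all-`ok` (rather than `changed`) results reported for already-labeled nodes.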
2026-04-07 02:09:42.730402 | orchestrator |
2026-04-07 02:09:42.730499 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-07 02:09:42.730511 | orchestrator |
2026-04-07 02:09:42.730520 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-07 02:09:42.730529 | orchestrator | Tuesday 07 April 2026 02:09:39 +0000 (0:00:00.170) 0:00:00.170 *********
2026-04-07 02:09:42.730539 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-07 02:09:42.730547 | orchestrator |
2026-04-07 02:09:42.730556 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-07 02:09:42.730584 | orchestrator | Tuesday 07 April 2026 02:09:40 +0000 (0:00:00.767) 0:00:00.938 *********
2026-04-07 02:09:42.730593 | orchestrator | changed: [testbed-manager]
2026-04-07 02:09:42.730603 | orchestrator |
2026-04-07 02:09:42.730611 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-07 02:09:42.730619 | orchestrator | Tuesday 07 April 2026 02:09:41 +0000 (0:00:01.366) 0:00:02.304 *********
2026-04-07 02:09:42.730632 | orchestrator | changed: [testbed-manager]
2026-04-07 02:09:42.730655 | orchestrator |
2026-04-07 02:09:42.730667 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:09:42.730676 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:09:42.730686 | orchestrator |
2026-04-07 02:09:42.730694 | orchestrator |
2026-04-07 02:09:42.730702 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:09:42.730710 | orchestrator | Tuesday 07 April 2026 02:09:42 +0000 (0:00:00.487) 0:00:02.792 *********
2026-04-07 02:09:42.730719 | orchestrator | ===============================================================================
2026-04-07 02:09:42.730727 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s
2026-04-07 02:09:42.730736 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2026-04-07 02:09:42.730745 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.49s
2026-04-07 02:09:43.121759 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-04-07 02:09:55.453091 | orchestrator | 2026-04-07 02:09:55 | INFO  | Task 1d06a826-c5e4-4876-af21-204de1ec7bbd (openstackclient) was prepared for execution.
2026-04-07 02:09:55.453174 | orchestrator | 2026-04-07 02:09:55 | INFO  | It takes a moment until task 1d06a826-c5e4-4876-af21-204de1ec7bbd (openstackclient) has been started and output is visible here.
2026-04-07 02:10:43.600950 | orchestrator |
2026-04-07 02:10:43.601172 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-07 02:10:43.601199 | orchestrator |
2026-04-07 02:10:43.601215 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-07 02:10:43.601230 | orchestrator | Tuesday 07 April 2026 02:10:00 +0000 (0:00:00.273) 0:00:00.273 *********
2026-04-07 02:10:43.601249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-07 02:10:43.601265 | orchestrator |
2026-04-07 02:10:43.601312 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-07 02:10:43.601327 | orchestrator | Tuesday 07 April 2026 02:10:00 +0000 (0:00:00.244) 0:00:00.518 *********
2026-04-07 02:10:43.601343 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-07 02:10:43.601360 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-07 02:10:43.601375 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-07 02:10:43.601388 | orchestrator |
2026-04-07 02:10:43.601402 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-07 02:10:43.601417 | orchestrator | Tuesday 07 April 2026 02:10:01 +0000 (0:00:01.469) 0:00:01.988 *********
2026-04-07 02:10:43.601433 | orchestrator | changed: [testbed-manager]
2026-04-07 02:10:43.601448 | orchestrator |
2026-04-07 02:10:43.601463 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-07 02:10:43.601477 | orchestrator | Tuesday 07 April 2026 02:10:03 +0000 (0:00:01.612) 0:00:03.600 *********
2026-04-07 02:10:43.601494 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-07 02:10:43.601511 | orchestrator | ok: [testbed-manager]
2026-04-07 02:10:43.601528 | orchestrator |
2026-04-07 02:10:43.601544 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-07 02:10:43.601560 | orchestrator | Tuesday 07 April 2026 02:10:38 +0000 (0:00:34.601) 0:00:38.202 *********
2026-04-07 02:10:43.601575 | orchestrator | changed: [testbed-manager]
2026-04-07 02:10:43.601590 | orchestrator |
2026-04-07 02:10:43.601605 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-07 02:10:43.601620 | orchestrator | Tuesday 07 April 2026 02:10:39 +0000 (0:00:00.960) 0:00:39.163 *********
2026-04-07 02:10:43.601635 | orchestrator | ok: [testbed-manager]
2026-04-07 02:10:43.601650 | orchestrator |
2026-04-07 02:10:43.601664 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-07 02:10:43.601679 | orchestrator | Tuesday 07 April 2026 02:10:39 +0000 (0:00:00.666) 0:00:39.830 *********
2026-04-07 02:10:43.601693 | orchestrator | changed: [testbed-manager]
2026-04-07 02:10:43.601706 | orchestrator |
2026-04-07 02:10:43.601719 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-07 02:10:43.601733 | orchestrator | Tuesday 07 April 2026 02:10:41 +0000 (0:00:01.525) 0:00:41.355 *********
2026-04-07 02:10:43.601748 | orchestrator | changed: [testbed-manager]
2026-04-07 02:10:43.601764 | orchestrator |
2026-04-07 02:10:43.601779 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-07 02:10:43.601794 | orchestrator | Tuesday 07 April 2026 02:10:42 +0000 (0:00:00.809) 0:00:42.164 *********
2026-04-07 02:10:43.601811 | orchestrator | changed: [testbed-manager]
2026-04-07 02:10:43.601825 | orchestrator |
2026-04-07 02:10:43.601841 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-07 02:10:43.601853 | orchestrator | Tuesday 07 April 2026 02:10:42 +0000 (0:00:00.658) 0:00:42.822 *********
2026-04-07 02:10:43.601862 | orchestrator | ok: [testbed-manager]
2026-04-07 02:10:43.601870 | orchestrator |
2026-04-07 02:10:43.601879 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:10:43.601888 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:10:43.601898 | orchestrator |
2026-04-07 02:10:43.601906 | orchestrator |
2026-04-07 02:10:43.601915 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:10:43.601924 | orchestrator | Tuesday 07 April 2026 02:10:43 +0000 (0:00:00.441) 0:00:43.264 *********
2026-04-07 02:10:43.601932 | orchestrator | ===============================================================================
2026-04-07 02:10:43.601941 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.60s
2026-04-07 02:10:43.601949 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.61s
2026-04-07 02:10:43.601972 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.53s
2026-04-07 02:10:43.601981 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.47s
2026-04-07 02:10:43.601989 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s
2026-04-07 02:10:43.601998 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.81s
2026-04-07 02:10:43.602006 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s
2026-04-07 02:10:43.602112 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s
2026-04-07 02:10:43.602125 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-04-07 02:10:43.602134 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-04-07 02:10:46.245533 | orchestrator | 2026-04-07 02:10:46 | INFO  | Task 04df26a8-fa9b-4927-bb81-fe8f42c87805 (common) was prepared for execution.
2026-04-07 02:10:46.245632 | orchestrator | 2026-04-07 02:10:46 | INFO  | It takes a moment until task 04df26a8-fa9b-4927-bb81-fe8f42c87805 (common) has been started and output is visible here.
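The openstackclient role above copies a docker-compose.yml, restarts the service, and then waits for a healthy container. A hedged sketch of what such a compose file might look like: the directory paths come from the "Create required directories" task in the log, but the image reference, tag, and healthcheck are assumptions, not the role's actual template:

```yaml
# Hypothetical /opt/openstackclient/docker-compose.yml
services:
  openstackclient:
    image: registry.osism.tech/osism/openstackclient:latest  # image/tag assumed
    restart: unless-stopped
    volumes:
      - /opt/configuration/environments/openstack:/opt/configuration/environments/openstack:ro
      - /opt/openstackclient/data:/data
    # An assumed healthcheck; the "Wait for an healthy service" handler in the
    # log implies the container reports a health status that can be polled.
    healthcheck:
      test: ["CMD", "openstack", "--version"]
      interval: 10s
      retries: 6
```

With a healthcheck defined, `docker inspect --format '{{.State.Health.Status}}' openstackclient` reports `starting`, `healthy`, or `unhealthy`, which a wait handler can poll until it reads `healthy`.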
2026-04-07 02:10:59.420167 | orchestrator |
2026-04-07 02:10:59.420264 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-07 02:10:59.420278 | orchestrator |
2026-04-07 02:10:59.420287 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-07 02:10:59.420295 | orchestrator | Tuesday 07 April 2026 02:10:50 +0000 (0:00:00.304) 0:00:00.304 *********
2026-04-07 02:10:59.420304 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:10:59.420314 | orchestrator |
2026-04-07 02:10:59.420322 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-07 02:10:59.420330 | orchestrator | Tuesday 07 April 2026 02:10:52 +0000 (0:00:01.374) 0:00:01.678 *********
2026-04-07 02:10:59.420337 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420345 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420354 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420362 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420370 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420378 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420385 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420393 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420416 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420425 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-07 02:10:59.420432 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420441 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420449 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420457 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420464 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420473 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420481 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-07 02:10:59.420505 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420514 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420522 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420530 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-07 02:10:59.420537 | orchestrator |
2026-04-07 02:10:59.420545 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-07 02:10:59.420553 | orchestrator | Tuesday 07 April 2026 02:10:55 +0000 (0:00:02.860) 0:00:04.539 *********
2026-04-07 02:10:59.420561 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:10:59.420571 | orchestrator |
2026-04-07 02:10:59.420579 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-07 02:10:59.420590 | orchestrator | Tuesday 07 April 2026 02:10:56 +0000 (0:00:01.528) 0:00:06.068 *********
2026-04-07 02:10:59.420601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:10:59.420613 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:10:59.420649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:10:59.420666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:10:59.420682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:10:59.420696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:10:59.420729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:10:59.420745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:10:59.420760 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:10:59.420784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616459 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 
02:11:00.616476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:00.616512 | orchestrator | 2026-04-07 02:11:00.616517 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-07 02:11:00.616522 | orchestrator | Tuesday 07 April 2026 02:11:00 +0000 (0:00:03.713) 0:00:09.781 ********* 2026-04-07 02:11:00.616528 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:00.616532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:00.616536 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:00.616541 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:11:00.616546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:00.616556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304630 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:11:01.304689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:01.304704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304723 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:11:01.304732 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:01.304747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304765 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:11:01.304795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:01.304819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304838 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:11:01.304847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:01.304856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:01.304876 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:11:01.304893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:01.304908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:02.284021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:02.284128 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:11:02.284141 | orchestrator | 2026-04-07 02:11:02.284150 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-07 02:11:02.284159 | orchestrator | Tuesday 07 April 2026 02:11:01 +0000 (0:00:01.041) 0:00:10.823 ********* 2026-04-07 02:11:02.284168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:02.284178 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:02.284187 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:11:02.284210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 02:11:02.284223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284261 | orchestrator | skipping: [testbed-manager]
2026-04-07 02:11:02.284269 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:11:02.284298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:02.284306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284322 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:11:02.284330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:02.284337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:02.284362 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:11:02.284369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:02.284393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505582 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:11:07.505592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505609 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:11:07.505613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:07.505643 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:11:07.505647 | orchestrator |
2026-04-07 02:11:07.505652 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-07 02:11:07.505658 | orchestrator | Tuesday 07 April 2026 02:11:03 +0000 (0:00:01.954) 0:00:12.778 *********
2026-04-07 02:11:07.505662 | orchestrator | skipping: [testbed-manager]
2026-04-07 02:11:07.505666 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:11:07.505670 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:11:07.505674 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:11:07.505688 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:11:07.505693 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:11:07.505697 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:11:07.505701 | orchestrator |
2026-04-07 02:11:07.505705 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-07 02:11:07.505709 | orchestrator | Tuesday 07 April 2026 02:11:04 +0000 (0:00:00.764) 0:00:13.543 *********
2026-04-07 02:11:07.505713 | orchestrator | skipping: [testbed-manager]
2026-04-07 02:11:07.505718 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:11:07.505722 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:11:07.505726 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:11:07.505730 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:11:07.505734 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:11:07.505738 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:11:07.505742 | orchestrator |
2026-04-07 02:11:07.505746 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-07 02:11:07.505751 | orchestrator | Tuesday 07 April 2026 02:11:04 +0000 (0:00:00.929) 0:00:14.473 *********
2026-04-07 02:11:07.505756 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:07.505810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:10.460158 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460444 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460491 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:10.460507 | orchestrator |
2026-04-07 02:11:10.460517 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-07 02:11:10.460527 | orchestrator | Tuesday 07 April 2026 02:11:08 +0000 (0:00:03.454) 0:00:17.927 *********
2026-04-07 02:11:10.460535 | orchestrator | [WARNING]: Skipped
2026-04-07 02:11:10.460545 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-07 02:11:10.460556 | orchestrator | to this access issue:
2026-04-07 02:11:10.460565 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-07 02:11:10.460573 | orchestrator | directory
2026-04-07 02:11:10.460582 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 02:11:10.460591 | orchestrator |
2026-04-07 02:11:10.460600 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-07 02:11:10.460608 | orchestrator | Tuesday 07 April 2026 02:11:09 +0000 (0:00:00.990) 0:00:18.918 *********
2026-04-07 02:11:10.460617 | orchestrator | [WARNING]: Skipped
2026-04-07 02:11:10.460630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-07 02:11:21.027870 | orchestrator | to this access issue:
2026-04-07 02:11:21.027974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-07 02:11:21.027988 | orchestrator | directory
2026-04-07 02:11:21.027999 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 02:11:21.028009 | orchestrator |
2026-04-07 02:11:21.028019 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-07 02:11:21.028029 | orchestrator | Tuesday 07 April 2026 02:11:10 +0000 (0:00:01.377) 0:00:20.295 *********
2026-04-07 02:11:21.028060 | orchestrator | [WARNING]: Skipped
2026-04-07 02:11:21.028069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-07 02:11:21.028105 | orchestrator | to this access issue:
2026-04-07 02:11:21.028114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-07 02:11:21.028122 | orchestrator | directory
2026-04-07 02:11:21.028130 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 02:11:21.028139 | orchestrator |
2026-04-07 02:11:21.028147 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-07 02:11:21.028156 | orchestrator | Tuesday 07 April 2026 02:11:11 +0000 (0:00:00.897) 0:00:21.193 *********
2026-04-07 02:11:21.028165 | orchestrator | [WARNING]: Skipped
2026-04-07 02:11:21.028174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-07 02:11:21.028183 | orchestrator | to this access issue:
2026-04-07 02:11:21.028191 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-07 02:11:21.028200 | orchestrator | directory
2026-04-07 02:11:21.028209 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 02:11:21.028217 | orchestrator |
2026-04-07 02:11:21.028226 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-07 02:11:21.028234 | orchestrator | Tuesday 07 April 2026 02:11:12 +0000 (0:00:00.864) 0:00:22.057 *********
2026-04-07 02:11:21.028243 | orchestrator | changed: [testbed-manager]
2026-04-07 02:11:21.028252 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:11:21.028260 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:11:21.028268 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:11:21.028276 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:11:21.028284 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:11:21.028311 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:11:21.028320 | orchestrator |
2026-04-07 02:11:21.028328 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-07 02:11:21.028336 | orchestrator | Tuesday 07 April 2026 02:11:15 +0000 (0:00:02.640) 0:00:24.698 *********
2026-04-07 02:11:21.028344 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028370 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028379 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028388 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028399 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-07 02:11:21.028404 | orchestrator |
2026-04-07 02:11:21.028411 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-07 02:11:21.028418 | orchestrator | Tuesday 07 April 2026 02:11:17 +0000 (0:00:02.171) 0:00:26.869 *********
2026-04-07 02:11:21.028423 | orchestrator | changed: [testbed-manager]
2026-04-07 02:11:21.028430 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:11:21.028435 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:11:21.028442 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:11:21.028448 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:11:21.028454 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:11:21.028460 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:11:21.028466 | orchestrator |
2026-04-07 02:11:21.028473 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-04-07 02:11:21.028487 | orchestrator | Tuesday 07 April 2026 02:11:19 +0000 (0:00:01.948) 0:00:28.818 *********
2026-04-07 02:11:21.028495 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:21.028520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:21.028528 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:21.028534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:21.028541 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:21.028547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:21.028557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:21.028571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:21.028584 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:21.028598 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.961957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:26.962163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962208 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:26.962217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962251 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 02:11:26.962281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962304 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:11:26.962312 | orchestrator |
2026-04-07 02:11:26.962320 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox]
************************ 2026-04-07 02:11:26.962329 | orchestrator | Tuesday 07 April 2026 02:11:21 +0000 (0:00:01.729) 0:00:30.547 ********* 2026-04-07 02:11:26.962336 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962345 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962371 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962377 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962384 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-07 02:11:26.962390 | orchestrator | 2026-04-07 02:11:26.962398 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-07 02:11:26.962405 | orchestrator | Tuesday 07 April 2026 02:11:23 +0000 (0:00:02.053) 0:00:32.600 ********* 2026-04-07 02:11:26.962412 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962421 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962428 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962448 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2026-04-07 02:11:26.962456 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962463 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 02:11:26.962471 | orchestrator | 2026-04-07 02:11:26.962479 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-07 02:11:26.962486 | orchestrator | Tuesday 07 April 2026 02:11:24 +0000 (0:00:01.721) 0:00:34.322 ********* 2026-04-07 02:11:26.962494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:26.962509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539472 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539747 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 02:11:27.539808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539820 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:11:27.539927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:12:53.769166 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:12:53.769274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:12:53.769282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-07 02:12:53.769296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:12:53.769301 | orchestrator | 2026-04-07 02:12:53.769306 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-07 02:12:53.769311 | orchestrator | Tuesday 07 April 2026 02:11:27 +0000 (0:00:02.739) 0:00:37.061 ********* 2026-04-07 02:12:53.769315 | orchestrator | changed: [testbed-manager] 2026-04-07 02:12:53.769321 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:12:53.769325 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:12:53.769328 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:12:53.769360 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:12:53.769366 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:12:53.769370 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:12:53.769374 | orchestrator | 2026-04-07 02:12:53.769379 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-07 02:12:53.769383 | orchestrator | Tuesday 07 April 2026 02:11:28 +0000 (0:00:01.410) 0:00:38.472 ********* 2026-04-07 02:12:53.769387 | orchestrator | changed: [testbed-manager] 2026-04-07 02:12:53.769391 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:12:53.769395 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:12:53.769399 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:12:53.769403 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:12:53.769407 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:12:53.769411 | orchestrator | 
changed: [testbed-node-5] 2026-04-07 02:12:53.769414 | orchestrator | 2026-04-07 02:12:53.769418 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769422 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:01.101) 0:00:39.573 ********* 2026-04-07 02:12:53.769426 | orchestrator | 2026-04-07 02:12:53.769430 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769434 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.071) 0:00:39.645 ********* 2026-04-07 02:12:53.769438 | orchestrator | 2026-04-07 02:12:53.769441 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769445 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.084) 0:00:39.729 ********* 2026-04-07 02:12:53.769449 | orchestrator | 2026-04-07 02:12:53.769453 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769457 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.067) 0:00:39.796 ********* 2026-04-07 02:12:53.769461 | orchestrator | 2026-04-07 02:12:53.769464 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769473 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.262) 0:00:40.059 ********* 2026-04-07 02:12:53.769476 | orchestrator | 2026-04-07 02:12:53.769480 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769484 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.060) 0:00:40.120 ********* 2026-04-07 02:12:53.769488 | orchestrator | 2026-04-07 02:12:53.769492 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-07 02:12:53.769496 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 
(0:00:00.065) 0:00:40.185 ********* 2026-04-07 02:12:53.769500 | orchestrator | 2026-04-07 02:12:53.769503 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-07 02:12:53.769507 | orchestrator | Tuesday 07 April 2026 02:11:30 +0000 (0:00:00.091) 0:00:40.277 ********* 2026-04-07 02:12:53.769511 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:12:53.769515 | orchestrator | changed: [testbed-manager] 2026-04-07 02:12:53.769519 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:12:53.769523 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:12:53.769527 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:12:53.769540 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:12:53.769544 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:12:53.769548 | orchestrator | 2026-04-07 02:12:53.769552 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-07 02:12:53.769556 | orchestrator | Tuesday 07 April 2026 02:12:11 +0000 (0:00:40.366) 0:01:20.643 ********* 2026-04-07 02:12:53.769560 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:12:53.769564 | orchestrator | changed: [testbed-manager] 2026-04-07 02:12:53.769567 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:12:53.769571 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:12:53.769575 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:12:53.769579 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:12:53.769583 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:12:53.769586 | orchestrator | 2026-04-07 02:12:53.769590 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-07 02:12:53.769594 | orchestrator | Tuesday 07 April 2026 02:12:43 +0000 (0:00:32.737) 0:01:53.381 ********* 2026-04-07 02:12:53.769598 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:12:53.769603 | orchestrator | ok: 
[testbed-node-1] 2026-04-07 02:12:53.769606 | orchestrator | ok: [testbed-manager] 2026-04-07 02:12:53.769610 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:12:53.769614 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:12:53.769618 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:12:53.769622 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:12:53.769625 | orchestrator | 2026-04-07 02:12:53.769629 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-07 02:12:53.769633 | orchestrator | Tuesday 07 April 2026 02:12:45 +0000 (0:00:01.924) 0:01:55.305 ********* 2026-04-07 02:12:53.769637 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:12:53.769640 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:12:53.769644 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:12:53.769648 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:12:53.769652 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:12:53.769656 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:12:53.769659 | orchestrator | changed: [testbed-manager] 2026-04-07 02:12:53.769663 | orchestrator | 2026-04-07 02:12:53.769667 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:12:53.769672 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769677 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769686 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769693 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769698 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769703 | 
orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769707 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 02:12:53.769711 | orchestrator | 2026-04-07 02:12:53.769716 | orchestrator | 2026-04-07 02:12:53.769721 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:12:53.769725 | orchestrator | Tuesday 07 April 2026 02:12:53 +0000 (0:00:07.966) 0:02:03.272 ********* 2026-04-07 02:12:53.769729 | orchestrator | =============================================================================== 2026-04-07 02:12:53.769734 | orchestrator | common : Restart fluentd container ------------------------------------- 40.37s 2026-04-07 02:12:53.769738 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.74s 2026-04-07 02:12:53.769743 | orchestrator | common : Restart cron container ----------------------------------------- 7.97s 2026-04-07 02:12:53.769747 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.71s 2026-04-07 02:12:53.769752 | orchestrator | common : Copying over config.json files for services -------------------- 3.45s 2026-04-07 02:12:53.769756 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.86s 2026-04-07 02:12:53.769761 | orchestrator | common : Check common containers ---------------------------------------- 2.74s 2026-04-07 02:12:53.769765 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.64s 2026-04-07 02:12:53.769769 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.17s 2026-04-07 02:12:53.769773 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.05s 2026-04-07 02:12:53.769777 | orchestrator | service-cert-copy : 
common | Copying over backend internal TLS key ------ 1.95s 2026-04-07 02:12:53.769781 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.95s 2026-04-07 02:12:53.769784 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.92s 2026-04-07 02:12:53.769788 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.73s 2026-04-07 02:12:53.769792 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.72s 2026-04-07 02:12:53.769796 | orchestrator | common : include_tasks -------------------------------------------------- 1.53s 2026-04-07 02:12:53.769803 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s 2026-04-07 02:12:54.270757 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.38s 2026-04-07 02:12:54.270858 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2026-04-07 02:12:54.270869 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.10s 2026-04-07 02:12:56.945446 | orchestrator | 2026-04-07 02:12:56 | INFO  | Task 999da177-2ba6-4a08-87cf-ba2cfc8e9084 (loadbalancer) was prepared for execution. 2026-04-07 02:12:56.945547 | orchestrator | 2026-04-07 02:12:56 | INFO  | It takes a moment until task 999da177-2ba6-4a08-87cf-ba2cfc8e9084 (loadbalancer) has been started and output is visible here. 
2026-04-07 02:13:11.888493 | orchestrator | 2026-04-07 02:13:11.888574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:13:11.888583 | orchestrator | 2026-04-07 02:13:11.888589 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:13:11.888611 | orchestrator | Tuesday 07 April 2026 02:13:01 +0000 (0:00:00.278) 0:00:00.278 ********* 2026-04-07 02:13:11.888616 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:13:11.888623 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:13:11.888628 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:13:11.888632 | orchestrator | 2026-04-07 02:13:11.888637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:13:11.888642 | orchestrator | Tuesday 07 April 2026 02:13:01 +0000 (0:00:00.351) 0:00:00.629 ********* 2026-04-07 02:13:11.888648 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-07 02:13:11.888653 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-07 02:13:11.888657 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-07 02:13:11.888662 | orchestrator | 2026-04-07 02:13:11.888666 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-07 02:13:11.888671 | orchestrator | 2026-04-07 02:13:11.888675 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-07 02:13:11.888680 | orchestrator | Tuesday 07 April 2026 02:13:02 +0000 (0:00:00.554) 0:00:01.184 ********* 2026-04-07 02:13:11.888696 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:13:11.888701 | orchestrator | 2026-04-07 02:13:11.888706 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2026-04-07 02:13:11.888711 | orchestrator | Tuesday 07 April 2026 02:13:02 +0000 (0:00:00.614) 0:00:01.798 ********* 2026-04-07 02:13:11.888715 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:13:11.888720 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:13:11.888724 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:13:11.888729 | orchestrator | 2026-04-07 02:13:11.888734 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-07 02:13:11.888738 | orchestrator | Tuesday 07 April 2026 02:13:03 +0000 (0:00:00.743) 0:00:02.542 ********* 2026-04-07 02:13:11.888746 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:13:11.888753 | orchestrator | 2026-04-07 02:13:11.888760 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-07 02:13:11.888767 | orchestrator | Tuesday 07 April 2026 02:13:04 +0000 (0:00:00.719) 0:00:03.261 ********* 2026-04-07 02:13:11.888774 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:13:11.888781 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:13:11.888789 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:13:11.888796 | orchestrator | 2026-04-07 02:13:11.888802 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-07 02:13:11.888807 | orchestrator | Tuesday 07 April 2026 02:13:05 +0000 (0:00:00.648) 0:00:03.910 ********* 2026-04-07 02:13:11.888811 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888825 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 02:13:11.888839 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 02:13:11.888844 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 02:13:11.888849 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 02:13:11.888854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-07 02:13:11.888863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-07 02:13:11.888871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-07 02:13:11.888878 | orchestrator | 2026-04-07 02:13:11.888885 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 02:13:11.888894 | orchestrator | Tuesday 07 April 2026 02:13:07 +0000 (0:00:02.295) 0:00:06.205 ********* 2026-04-07 02:13:11.888899 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-07 02:13:11.888904 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-07 02:13:11.888909 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-07 02:13:11.888914 | orchestrator | 2026-04-07 02:13:11.888924 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 02:13:11.888931 | orchestrator | Tuesday 07 April 2026 02:13:08 +0000 (0:00:00.772) 0:00:06.978 ********* 2026-04-07 02:13:11.888942 | orchestrator | changed: [testbed-node-2] => 
(item=ip_vs) 2026-04-07 02:13:11.888952 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-07 02:13:11.888958 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-07 02:13:11.888965 | orchestrator | 2026-04-07 02:13:11.888972 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 02:13:11.888979 | orchestrator | Tuesday 07 April 2026 02:13:09 +0000 (0:00:01.283) 0:00:08.261 ********* 2026-04-07 02:13:11.888986 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-07 02:13:11.888995 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:11.889017 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-07 02:13:11.889024 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:11.889033 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-07 02:13:11.889039 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:11.889045 | orchestrator | 2026-04-07 02:13:11.889054 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-07 02:13:11.889062 | orchestrator | Tuesday 07 April 2026 02:13:09 +0000 (0:00:00.539) 0:00:08.801 ********* 2026-04-07 02:13:11.889073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:11.889091 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:11.889099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:11.889142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 
02:13:11.889151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:11.889165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:17.367741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:17.367849 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:17.367867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:17.367879 | orchestrator | 2026-04-07 02:13:17.367891 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-07 02:13:17.367903 | orchestrator | Tuesday 07 April 2026 02:13:11 +0000 (0:00:01.899) 0:00:10.700 ********* 2026-04-07 02:13:17.367936 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:13:17.367948 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:13:17.367959 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:13:17.367970 | orchestrator | 2026-04-07 02:13:17.367981 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-07 02:13:17.367992 | orchestrator | Tuesday 07 April 2026 02:13:12 +0000 (0:00:00.968) 0:00:11.669 ********* 2026-04-07 02:13:17.368003 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-07 02:13:17.368014 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-07 
02:13:17.368025 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-07 02:13:17.368035 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-07 02:13:17.368045 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-07 02:13:17.368055 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-07 02:13:17.368065 | orchestrator | 2026-04-07 02:13:17.368076 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-07 02:13:17.368086 | orchestrator | Tuesday 07 April 2026 02:13:14 +0000 (0:00:01.509) 0:00:13.179 ********* 2026-04-07 02:13:17.368097 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:13:17.368107 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:13:17.368149 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:13:17.368159 | orchestrator | 2026-04-07 02:13:17.368170 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-07 02:13:17.368180 | orchestrator | Tuesday 07 April 2026 02:13:15 +0000 (0:00:00.949) 0:00:14.128 ********* 2026-04-07 02:13:17.368190 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:13:17.368201 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:13:17.368210 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:13:17.368218 | orchestrator | 2026-04-07 02:13:17.368228 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-07 02:13:17.368239 | orchestrator | Tuesday 07 April 2026 02:13:16 +0000 (0:00:01.396) 0:00:15.525 ********* 2026-04-07 02:13:17.368250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 02:13:17.368283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:17.368295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:17.368309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:17.368330 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:17.368342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 02:13:17.368397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:17.368409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:17.368421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:17.368432 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:17.368451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 02:13:20.253951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:20.254106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:20.254161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:20.254172 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:20.254183 | orchestrator | 2026-04-07 02:13:20.254194 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-04-07 02:13:20.254204 | orchestrator | Tuesday 07 April 2026 02:13:17 +0000 (0:00:00.657) 0:00:16.182 ********* 2026-04-07 02:13:20.254213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:20.254224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:20.254233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:20.254299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:20.254310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:20.254319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:20.254328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:20.254338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:20.254360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', 
'__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:20.254398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:29.350723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8', 
'__omit_place_holder__908948f13ceddcd66a5365583e5ea486531919a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 02:13:29.350734 | orchestrator | 2026-04-07 02:13:29.350744 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-07 02:13:29.350754 | orchestrator | Tuesday 07 April 2026 02:13:20 +0000 (0:00:02.877) 0:00:19.060 ********* 2026-04-07 02:13:29.350762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:29.350864 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:29.350873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:29.350882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:29.350890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:29.350898 | orchestrator |
2026-04-07 02:13:29.350906 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-07 02:13:29.350914 | orchestrator | Tuesday 07 April 2026 02:13:23 +0000 (0:00:03.185) 0:00:22.246 *********
2026-04-07 02:13:29.350929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-07 02:13:29.350938 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-07 02:13:29.350946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-07 02:13:29.350954 | orchestrator |
2026-04-07 02:13:29.350962 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-07 02:13:29.350970 | orchestrator | Tuesday 07 April 2026 02:13:25 +0000 (0:00:02.018) 0:00:24.264 *********
2026-04-07 02:13:29.350978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-07 02:13:29.350985 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-07 02:13:29.350993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-07 02:13:29.351001 | orchestrator |
2026-04-07 02:13:29.351009 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-07 02:13:29.351016 | orchestrator | Tuesday 07 April 2026 02:13:28 +0000
(0:00:03.224) 0:00:27.489 *********
2026-04-07 02:13:29.351025 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:13:29.351034 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:13:29.351042 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:13:29.351050 | orchestrator |
2026-04-07 02:13:29.351063 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-07 02:13:41.495950 | orchestrator | Tuesday 07 April 2026 02:13:29 +0000 (0:00:00.677) 0:00:28.167 *********
2026-04-07 02:13:41.496087 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-07 02:13:41.496195 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-07 02:13:41.496218 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-07 02:13:41.496237 | orchestrator |
2026-04-07 02:13:41.496258 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-07 02:13:41.496277 | orchestrator | Tuesday 07 April 2026 02:13:31 +0000 (0:00:02.170) 0:00:30.337 *********
2026-04-07 02:13:41.496296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-07 02:13:41.496315 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-07 02:13:41.496333 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-07 02:13:41.496350 | orchestrator |
2026-04-07 02:13:41.496369 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-07 02:13:41.496387 | orchestrator | Tuesday 07 April 2026 02:13:33 +0000 (0:00:02.306) 0:00:32.643 *********
2026-04-07 02:13:41.496407 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-07 02:13:41.496427 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-07 02:13:41.496444 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-07 02:13:41.496462 | orchestrator |
2026-04-07 02:13:41.496498 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-07 02:13:41.496519 | orchestrator | Tuesday 07 April 2026 02:13:35 +0000 (0:00:01.506) 0:00:34.150 *********
2026-04-07 02:13:41.496542 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-07 02:13:41.496564 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-07 02:13:41.496586 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-07 02:13:41.496608 | orchestrator |
2026-04-07 02:13:41.496664 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-07 02:13:41.496688 | orchestrator | Tuesday 07 April 2026 02:13:36 +0000 (0:00:00.589) 0:00:35.739 *********
2026-04-07 02:13:41.496710 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:13:41.496732 | orchestrator |
2026-04-07 02:13:41.496754 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-04-07 02:13:41.496775 | orchestrator | Tuesday 07 April 2026 02:13:37 +0000 (0:00:00.589) 0:00:36.328 *********
2026-04-07 02:13:41.496801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 02:13:41.496826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 02:13:41.496857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 02:13:41.496910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:41.496932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:41.496954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:41.496987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:41.497012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:41.497033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:41.497054 | orchestrator |
2026-04-07 02:13:41.497074 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-04-07 02:13:41.497095 | orchestrator | Tuesday 07 April 2026 02:13:40 +0000 (0:00:03.365) 0:00:39.694 *********
2026-04-07 02:13:41.497271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 02:13:42.310267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:42.310372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:42.310435 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:13:42.310456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 02:13:42.310474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:42.310485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:42.310494 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:13:42.310503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 02:13:42.310542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:42.310553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:42.310569 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:13:42.310578 | orchestrator |
2026-04-07 02:13:42.310589 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-04-07
02:13:42.310598 | orchestrator | Tuesday 07 April 2026 02:13:41 +0000 (0:00:00.617) 0:00:40.311 *********
2026-04-07 02:13:42.310608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 02:13:42.310618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:42.310627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:42.310636 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:13:42.310645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 02:13:42.310663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:43.213204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:43.213315 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:13:43.213331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 02:13:43.213342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:43.213352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:43.213361 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:13:43.213370 | orchestrator |
2026-04-07 02:13:43.213380 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-07 02:13:43.213390 | orchestrator | Tuesday 07 April 2026 02:13:42 +0000 (0:00:00.812) 0:00:41.124 *********
2026-04-07 02:13:43.213399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 02:13:43.213408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:43.213434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:43.213450 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:13:43.213459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 02:13:43.213468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:43.213478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:43.213486 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:13:43.213496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 02:13:43.213519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:43.213533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:43.213554 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:13:44.803944 | orchestrator |
2026-04-07 02:13:44.804024 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-07 02:13:44.804033 | orchestrator | Tuesday 07 April 2026 02:13:43 +0000 (0:00:00.896) 0:00:42.021 *********
2026-04-07 02:13:44.804041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 02:13:44.804050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:44.804057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:44.804062 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:13:44.804069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-07 02:13:44.804075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 02:13:44.804092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 02:13:44.804112 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:13:44.804192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-07 02:13:44.804199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:44.804205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:44.804210 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:44.804216 | orchestrator | 2026-04-07 02:13:44.804221 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-07 02:13:44.804226 | orchestrator | Tuesday 07 April 2026 02:13:43 +0000 (0:00:00.623) 0:00:42.644 ********* 2026-04-07 02:13:44.804232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 02:13:44.804237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:44.804251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:44.804257 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:44.804271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 02:13:45.925749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:45.925842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:45.925854 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:45.925869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 02:13:45.925876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:45.925883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:45.925910 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:45.925917 | orchestrator | 2026-04-07 02:13:45.925925 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-07 02:13:45.925932 | orchestrator | Tuesday 07 April 2026 02:13:44 +0000 (0:00:00.974) 0:00:43.619 ********* 2026-04-07 02:13:45.925950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-07 02:13:45.925974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:45.925981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:45.925987 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:45.925993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-07 02:13:45.926000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:45.926063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:45.926075 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:45.926086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-07 02:13:45.926101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:47.350798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:47.350903 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:47.350921 | orchestrator | 2026-04-07 02:13:47.350932 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-07 02:13:47.350944 | orchestrator | Tuesday 07 April 2026 02:13:45 +0000 (0:00:01.117) 0:00:44.737 ********* 2026-04-07 02:13:47.350957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 02:13:47.350970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:47.351004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:47.351016 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:47.351026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 02:13:47.351049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:47.351072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:47.351078 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:47.351085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 02:13:47.351091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:47.351103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:47.351109 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:47.351115 | orchestrator | 2026-04-07 02:13:47.351160 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-07 02:13:47.351167 | orchestrator | Tuesday 07 April 2026 02:13:46 +0000 (0:00:00.596) 0:00:45.333 ********* 2026-04-07 02:13:47.351173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 02:13:47.351179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:47.351197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:54.214778 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:54.214887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 02:13:54.214907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:54.214945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:54.214959 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:54.214970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 02:13:54.214982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 02:13:54.215008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 02:13:54.215020 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:54.215031 | orchestrator | 2026-04-07 02:13:54.215044 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-07 02:13:54.215057 | orchestrator | Tuesday 07 April 2026 02:13:47 +0000 (0:00:00.829) 0:00:46.163 ********* 2026-04-07 02:13:54.215068 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 02:13:54.215097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 02:13:54.215109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-07 02:13:54.215203 | orchestrator | 2026-04-07 02:13:54.215219 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-07 02:13:54.215230 | orchestrator | Tuesday 07 April 2026 02:13:49 +0000 (0:00:01.758) 0:00:47.921 ********* 2026-04-07 02:13:54.215242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 02:13:54.215253 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 02:13:54.215264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-07 02:13:54.215275 | orchestrator | 2026-04-07 02:13:54.215294 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-07 02:13:54.215307 | orchestrator | Tuesday 07 April 2026 02:13:50 +0000 (0:00:01.809) 0:00:49.730 ********* 2026-04-07 02:13:54.215319 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 02:13:54.215332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 02:13:54.215345 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 02:13:54.215357 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 02:13:54.215369 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:54.215382 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 02:13:54.215395 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:54.215407 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 02:13:54.215420 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:13:54.215432 | orchestrator | 2026-04-07 02:13:54.215445 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-07 02:13:54.215459 | orchestrator | Tuesday 07 April 2026 02:13:51 +0000 (0:00:00.840) 0:00:50.571 ********* 2026-04-07 02:13:54.215473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:54.215487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:54.215506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 02:13:54.215530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:58.659286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:58.659406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 02:13:58.659427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:58.659444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:58.659457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 02:13:58.659472 | orchestrator | 2026-04-07 02:13:58.659487 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-07 02:13:58.659518 | orchestrator | Tuesday 07 April 2026 02:13:54 +0000 (0:00:02.459) 0:00:53.030 ********* 2026-04-07 02:13:58.659530 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:13:58.659542 | orchestrator | 2026-04-07 02:13:58.659554 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-07 02:13:58.659564 | orchestrator | Tuesday 07 April 2026 02:13:55 +0000 (0:00:00.955) 0:00:53.986 ********* 2026-04-07 02:13:58.659597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 02:13:58.659635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:13:58.659648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:13:58.659660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:13:58.659672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 02:13:58.659689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:13:58.659701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:13:58.659728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 02:13:59.367118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367226 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:13:59.367236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367276 | orchestrator | 2026-04-07 02:13:59.367285 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-04-07 02:13:59.367295 | orchestrator | Tuesday 07 April 2026 02:13:58 +0000 (0:00:03.481) 0:00:57.467 ********* 2026-04-07 02:13:59.367321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 02:13:59.367329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:13:59.367352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367368 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:13:59.367377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 02:13:59.367390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:13:59.367404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:13:59.367418 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:13:59.367431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 02:14:08.593274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 02:14:08.593391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-04-07 02:14:08.593408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 02:14:08.593444 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:08.593459 | orchestrator | 2026-04-07 02:14:08.593471 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-07 02:14:08.593484 | orchestrator | Tuesday 07 April 2026 02:13:59 +0000 (0:00:00.713) 0:00:58.181 ********* 2026-04-07 02:14:08.593496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593523 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:08.593551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593574 | 
orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:08.593586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-07 02:14:08.593608 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:08.593619 | orchestrator | 2026-04-07 02:14:08.593630 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-07 02:14:08.593641 | orchestrator | Tuesday 07 April 2026 02:14:00 +0000 (0:00:01.193) 0:00:59.374 ********* 2026-04-07 02:14:08.593652 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:08.593663 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:08.593674 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:08.593685 | orchestrator | 2026-04-07 02:14:08.593697 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-07 02:14:08.593708 | orchestrator | Tuesday 07 April 2026 02:14:01 +0000 (0:00:01.264) 0:01:00.639 ********* 2026-04-07 02:14:08.593718 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:08.593729 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:08.593740 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:08.593751 | orchestrator | 2026-04-07 02:14:08.593762 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-07 02:14:08.593776 | orchestrator | Tuesday 07 April 2026 02:14:03 +0000 (0:00:02.090) 0:01:02.730 ********* 2026-04-07 02:14:08.593790 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:14:08.593803 | 
orchestrator | 2026-04-07 02:14:08.593835 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-07 02:14:08.593849 | orchestrator | Tuesday 07 April 2026 02:14:04 +0000 (0:00:00.674) 0:01:03.404 ********* 2026-04-07 02:14:08.593863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:08.593892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:08.593906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:08.593919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:08.593931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:08.593951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:09.247229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247264 | orchestrator | 2026-04-07 02:14:09.247277 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-07 02:14:09.247290 | orchestrator | Tuesday 07 April 2026 02:14:08 +0000 (0:00:04.004) 0:01:07.409 ********* 2026-04-07 02:14:09.247303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 02:14:09.247315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247381 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:09.247400 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 02:14:09.247412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:09.247435 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:09.247447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 02:14:09.247476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 02:14:19.375341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:19.375547 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:19.375569 | orchestrator | 2026-04-07 02:14:19.375583 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-07 02:14:19.375597 | orchestrator | Tuesday 07 April 2026 02:14:09 +0000 (0:00:00.648) 0:01:08.058 ********* 2026-04-07 02:14:19.375625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375709 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:19.375721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375745 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:19.375756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-07 02:14:19.375778 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:19.375789 | orchestrator | 2026-04-07 02:14:19.375801 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-07 02:14:19.375812 | orchestrator | Tuesday 07 April 2026 02:14:10 +0000 (0:00:00.830) 0:01:08.889 ********* 2026-04-07 02:14:19.375823 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:19.375835 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:19.375847 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:19.375857 | orchestrator | 2026-04-07 02:14:19.375874 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-07 02:14:19.375893 | orchestrator | Tuesday 07 April 2026 02:14:11 +0000 (0:00:01.643) 0:01:10.532 ********* 2026-04-07 02:14:19.375940 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:19.375959 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:19.375976 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:19.375994 | orchestrator | 2026-04-07 02:14:19.376012 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-07 02:14:19.376031 | orchestrator | 
Tuesday 07 April 2026 02:14:13 +0000 (0:00:02.103) 0:01:12.636 ********* 2026-04-07 02:14:19.376049 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:19.376068 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:19.376087 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:19.376107 | orchestrator | 2026-04-07 02:14:19.376156 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-07 02:14:19.376170 | orchestrator | Tuesday 07 April 2026 02:14:14 +0000 (0:00:00.319) 0:01:12.956 ********* 2026-04-07 02:14:19.376181 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:14:19.376191 | orchestrator | 2026-04-07 02:14:19.376202 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-07 02:14:19.376213 | orchestrator | Tuesday 07 April 2026 02:14:14 +0000 (0:00:00.704) 0:01:13.661 ********* 2026-04-07 02:14:19.376250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 02:14:19.376274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 02:14:19.376287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 02:14:19.376298 | orchestrator | 2026-04-07 02:14:19.376310 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-07 02:14:19.376321 | orchestrator | Tuesday 07 April 2026 02:14:17 +0000 (0:00:03.060) 0:01:16.721 ********* 2026-04-07 02:14:19.376343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 02:14:19.376355 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:19.376367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 02:14:19.376378 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:19.376398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 02:14:27.760901 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:27.761011 | orchestrator | 2026-04-07 02:14:27.761029 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-07 02:14:27.761042 | orchestrator | Tuesday 07 April 2026 02:14:19 +0000 (0:00:01.471) 0:01:18.192 ********* 2026-04-07 02:14:27.761071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761098 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:27.761173 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761197 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:27.761207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-07 02:14:27.761227 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:27.761236 | orchestrator | 2026-04-07 02:14:27.761246 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-07 02:14:27.761256 | orchestrator | Tuesday 07 April 2026 02:14:21 +0000 (0:00:01.893) 0:01:20.085 ********* 2026-04-07 02:14:27.761266 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:27.761276 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:27.761286 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:27.761295 | orchestrator | 2026-04-07 02:14:27.761310 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-07 02:14:27.761320 | orchestrator | Tuesday 07 April 2026 02:14:21 +0000 (0:00:00.424) 0:01:20.510 ********* 2026-04-07 02:14:27.761329 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:27.761339 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:27.761349 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:27.761358 | orchestrator | 2026-04-07 02:14:27.761368 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-07 02:14:27.761377 | orchestrator | Tuesday 07 April 2026 02:14:23 +0000 (0:00:01.415) 0:01:21.926 ********* 2026-04-07 02:14:27.761387 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:14:27.761397 | orchestrator | 2026-04-07 02:14:27.761406 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-07 02:14:27.761416 | orchestrator | Tuesday 07 April 2026 02:14:24 +0000 (0:00:01.033) 0:01:22.959 ********* 2026-04-07 02:14:27.761450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:27.761475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:14:27.761489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 
02:14:27.761502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:27.761514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:27.761533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 02:14:28.534866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.534930 | orchestrator | 2026-04-07 02:14:28.534950 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-07 02:14:28.534963 | orchestrator | Tuesday 07 April 2026 02:14:27 +0000 (0:00:03.757) 0:01:26.717 ********* 2026-04-07 02:14:28.534975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 02:14:28.534988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.535000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.535011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:28.535023 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:28.535046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 02:14:35.089649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-04-07 02:14:35.089755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 02:14:35.089776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:35.089801 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:35.090724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 02:14:35.090787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:14:35.090850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 
02:14:35.090861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 02:14:35.090868 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:35.090876 | orchestrator | 2026-04-07 02:14:35.090884 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-07 02:14:35.090893 | orchestrator | Tuesday 07 April 2026 02:14:28 +0000 (0:00:00.739) 0:01:27.457 ********* 2026-04-07 02:14:35.090900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090917 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:35.090924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090938 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:35.090944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-07 02:14:35.090958 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:35.090965 | orchestrator | 2026-04-07 02:14:35.090971 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-07 02:14:35.090978 | orchestrator | Tuesday 07 April 2026 02:14:29 +0000 (0:00:01.290) 0:01:28.747 ********* 2026-04-07 02:14:35.090985 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:35.090997 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:35.091004 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:35.091011 | orchestrator | 2026-04-07 02:14:35.091018 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-07 02:14:35.091024 | orchestrator | Tuesday 07 April 2026 02:14:31 +0000 (0:00:01.291) 0:01:30.038 ********* 2026-04-07 02:14:35.091031 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:35.091039 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:35.091045 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:35.091052 | orchestrator | 2026-04-07 02:14:35.091058 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-07 
02:14:35.091065 | orchestrator | Tuesday 07 April 2026 02:14:33 +0000 (0:00:02.150) 0:01:32.189 ********* 2026-04-07 02:14:35.091072 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:35.091078 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:35.091085 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:35.091092 | orchestrator | 2026-04-07 02:14:35.091098 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-07 02:14:35.091104 | orchestrator | Tuesday 07 April 2026 02:14:33 +0000 (0:00:00.334) 0:01:32.523 ********* 2026-04-07 02:14:35.091111 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:35.091118 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:35.091124 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:35.091154 | orchestrator | 2026-04-07 02:14:35.091162 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-07 02:14:35.091168 | orchestrator | Tuesday 07 April 2026 02:14:34 +0000 (0:00:00.334) 0:01:32.857 ********* 2026-04-07 02:14:35.091175 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:14:35.091182 | orchestrator | 2026-04-07 02:14:35.091189 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-07 02:14:35.091195 | orchestrator | Tuesday 07 April 2026 02:14:35 +0000 (0:00:01.051) 0:01:33.908 ********* 2026-04-07 02:14:38.513279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 02:14:38.513419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 02:14:38.513452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:38.513505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:38.513585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513645 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 02:14:38.513696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 02:14:39.564534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:39.564549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-04-07 02:14:39.564650 | orchestrator | 2026-04-07 02:14:39.564664 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-07 02:14:39.564676 | orchestrator | Tuesday 07 April 2026 02:14:38 +0000 (0:00:03.717) 0:01:37.626 ********* 2026-04-07 02:14:39.564687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 02:14:39.564699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:39.564711 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.564742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.955525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.955650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 02:14:39.955667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.955680 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:39.955695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:39.956307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956434 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:39.956450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 02:14:39.956465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 02:14:39.956478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 02:14:39.956506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 02:14:50.242779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 02:14:50.242878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:14:50.242890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-07 02:14:50.242899 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:50.242907 | orchestrator | 2026-04-07 02:14:50.242915 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-07 02:14:50.242923 | orchestrator | Tuesday 07 April 2026 02:14:39 +0000 (0:00:01.143) 0:01:38.770 ********* 2026-04-07 02:14:50.242930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242947 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:50.242953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242965 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:50.242972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-07 02:14:50.242999 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:50.243006 | orchestrator | 2026-04-07 02:14:50.243012 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-07 02:14:50.243018 | orchestrator | Tuesday 07 April 2026 02:14:41 +0000 (0:00:01.393) 0:01:40.164 ********* 2026-04-07 02:14:50.243025 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:50.243032 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:50.243038 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:50.243044 | orchestrator | 2026-04-07 02:14:50.243050 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-07 02:14:50.243057 | orchestrator | Tuesday 07 April 2026 02:14:42 +0000 (0:00:01.298) 0:01:41.462 ********* 2026-04-07 02:14:50.243063 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:14:50.243069 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:14:50.243075 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:14:50.243082 | 
orchestrator | 2026-04-07 02:14:50.243088 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-07 02:14:50.243094 | orchestrator | Tuesday 07 April 2026 02:14:44 +0000 (0:00:02.103) 0:01:43.566 ********* 2026-04-07 02:14:50.243113 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:50.243120 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:14:50.243126 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:14:50.243132 | orchestrator | 2026-04-07 02:14:50.243193 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-07 02:14:50.243199 | orchestrator | Tuesday 07 April 2026 02:14:45 +0000 (0:00:00.340) 0:01:43.906 ********* 2026-04-07 02:14:50.243206 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:14:50.243212 | orchestrator | 2026-04-07 02:14:50.243218 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-07 02:14:50.243224 | orchestrator | Tuesday 07 April 2026 02:14:46 +0000 (0:00:01.136) 0:01:45.042 ********* 2026-04-07 02:14:50.243238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 02:14:50.243247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 02:14:50.243271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 02:14:53.418248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 02:14:53.418413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 02:14:53.419319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 02:14:53.419400 | orchestrator | 2026-04-07 02:14:53.419410 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-07 02:14:53.419419 | orchestrator | Tuesday 07 April 2026 02:14:50 +0000 (0:00:04.145) 0:01:49.187 ********* 2026-04-07 02:14:53.419436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 02:14:53.419453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 02:14:57.369826 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:14:57.369929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 
02:14:57.369960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 02:14:57.369991 | orchestrator | skipping: [testbed-node-1] 
2026-04-07 02:14:57.370093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 02:14:57.370121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-07 02:14:57.370203 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:14:57.370225 | orchestrator |
2026-04-07 02:14:57.370241 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-07 02:14:57.370257 | orchestrator | Tuesday 07 April 2026 02:14:53 +0000 (0:00:03.150) 0:01:52.338 *********
2026-04-07 02:14:57.370273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:14:57.370296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:15:06.328896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:15:06.328990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:15:06.329000 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:06.329009 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:06.329016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:15:06.329037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-07 02:15:06.329044 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:06.329051 | orchestrator |
2026-04-07 02:15:06.329058 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-07 02:15:06.329067 | orchestrator | Tuesday 07 April 2026 02:14:57 +0000 (0:00:03.847) 0:01:56.185 *********
2026-04-07 02:15:06.329088 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:06.329095 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:06.329103 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:06.329113 | orchestrator |
2026-04-07 02:15:06.329128 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-07 02:15:06.329163 | orchestrator | Tuesday 07 April 2026 02:14:58 +0000 (0:00:01.321) 0:01:57.507 *********
2026-04-07 02:15:06.329174 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:06.329185 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:06.329195 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:06.329204 | orchestrator |
2026-04-07 02:15:06.329215 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-07 02:15:06.329224 | orchestrator | Tuesday 07 April 2026 02:15:00 +0000 (0:00:02.165) 0:01:59.673 *********
2026-04-07 02:15:06.329233 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:06.329243 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:06.329253 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:06.329262 | orchestrator |
2026-04-07 02:15:06.329272 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-07 02:15:06.329282 | orchestrator | Tuesday 07 April 2026 02:15:01 +0000 (0:00:00.354) 0:02:00.027 *********
2026-04-07 02:15:06.329292 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:15:06.329302 | orchestrator |
2026-04-07 02:15:06.329313 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-07 02:15:06.329324 | orchestrator | Tuesday 07 April 2026 02:15:02 +0000 (0:00:01.115) 0:02:01.142 *********
2026-04-07 02:15:06.329370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329408 | orchestrator |
2026-04-07 02:15:06.329444 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-07 02:15:06.329469 | orchestrator | Tuesday 07 April 2026 02:15:05 +0000 (0:00:03.340) 0:02:04.483 *********
2026-04-07 02:15:06.329481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329504 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:06.329515 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:06.329525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-07 02:15:06.329608 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:06.329628 | orchestrator |
2026-04-07 02:15:06.329640 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-07 02:15:06.329651 | orchestrator | Tuesday 07 April 2026 02:15:06 +0000 (0:00:00.431) 0:02:04.914 *********
2026-04-07 02:15:06.329672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606756 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:15.606769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606788 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:15.606796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-07 02:15:15.606832 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:15.606840 | orchestrator |
2026-04-07 02:15:15.606849 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-07 02:15:15.606859 | orchestrator | Tuesday 07 April 2026 02:15:07 +0000 (0:00:00.947) 0:02:05.862 *********
2026-04-07 02:15:15.606867 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:15.606875 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:15.606883 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:15.606891 | orchestrator |
2026-04-07 02:15:15.606899 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-07 02:15:15.606907 | orchestrator | Tuesday 07 April 2026 02:15:08 +0000 (0:00:01.347) 0:02:07.210 *********
2026-04-07 02:15:15.606915 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:15.606923 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:15.606931 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:15.606938 | orchestrator |
2026-04-07 02:15:15.606946 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-07 02:15:15.606967 | orchestrator | Tuesday 07 April 2026 02:15:10 +0000 (0:00:02.083) 0:02:09.294 *********
2026-04-07 02:15:15.606975 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:15.606983 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:15.606991 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:15.606999 | orchestrator |
2026-04-07 02:15:15.607007 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-07 02:15:15.607014 | orchestrator | Tuesday 07 April 2026 02:15:10 +0000 (0:00:00.351) 0:02:09.645 *********
2026-04-07 02:15:15.607022 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:15:15.607030 | orchestrator |
2026-04-07 02:15:15.607038 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-07 02:15:15.607046 | orchestrator | Tuesday 07 April 2026 02:15:12 +0000 (0:00:01.257) 0:02:10.903 *********
2026-04-07 02:15:15.607076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:15.607101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:15.607118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:17.329909 | orchestrator |
2026-04-07 02:15:17.329993 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-04-07 02:15:17.330003 | orchestrator | Tuesday 07 April 2026 02:15:15 +0000 (0:00:03.520) 0:02:14.423 *********
2026-04-07 02:15:17.330062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:17.330074 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:17.330095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:17.330118 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:17.330129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 02:15:17.330136 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:17.330165 | orchestrator |
2026-04-07 02:15:17.330171 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-04-07 02:15:17.330177 | orchestrator | Tuesday 07 April 2026 02:15:16 +0000 (0:00:00.680) 0:02:15.104 *********
2026-04-07 02:15:17.330184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:17.330198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:17.330207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:17.330219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:26.931612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-07 02:15:26.931716 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:26.931727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:26.931737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:26.931768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:26.931776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:26.931783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-07 02:15:26.931788 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:26.931793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:26.931798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:26.931803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-07 02:15:26.931825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-07 02:15:26.931833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-07 02:15:26.931841 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:26.931849 | orchestrator |
2026-04-07 02:15:26.931858 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-07 02:15:26.931868 | orchestrator | Tuesday 07 April 2026 02:15:17 +0000 (0:00:01.045) 0:02:16.149 *********
2026-04-07 02:15:26.931876 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:26.931885 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:26.931892 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:26.931900 | orchestrator |
2026-04-07 02:15:26.931909 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-07 02:15:26.931918 | orchestrator | Tuesday 07 April 2026 02:15:19 +0000 (0:00:01.692) 0:02:17.841 *********
2026-04-07 02:15:26.931926 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:15:26.931934 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:15:26.931942 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:15:26.931950 | orchestrator |
2026-04-07 02:15:26.931958 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-07 02:15:26.931963 | orchestrator | Tuesday 07 April 2026 02:15:21 +0000 (0:00:02.154) 0:02:19.995 *********
2026-04-07 02:15:26.931968 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:26.931973 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:26.931991 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:26.931996 | orchestrator |
2026-04-07 02:15:26.932001 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-07 02:15:26.932006 | orchestrator | Tuesday 07 April 2026 02:15:21 +0000 (0:00:00.349) 0:02:20.345 *********
2026-04-07 02:15:26.932011 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:15:26.932016 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:15:26.932021 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:15:26.932025 | orchestrator |
2026-04-07 02:15:26.932030 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-07 02:15:26.932035 | orchestrator | Tuesday 07 April 2026 02:15:21 +0000 (0:00:00.351) 0:02:20.697 *********
2026-04-07 02:15:26.932040 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:15:26.932045 | orchestrator |
2026-04-07 02:15:26.932050 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-07 02:15:26.932054 | orchestrator | Tuesday 07 April 2026 02:15:23 +0000 (0:00:01.242) 0:02:21.940 *********
2026-04-07 02:15:26.932086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:15:26.932124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:15:26.932132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:15:26.932138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']},
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:15:26.932189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 02:15:27.539345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:15:27.539459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:15:27.539510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 02:15:27.539527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:15:27.539541 | 
orchestrator | 2026-04-07 02:15:27.539557 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-07 02:15:27.539567 | orchestrator | Tuesday 07 April 2026 02:15:26 +0000 (0:00:03.802) 0:02:25.743 ********* 2026-04-07 02:15:27.539597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 02:15:27.539626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-07 02:15:27.539646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:15:27.539671 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:27.539687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 02:15:27.539701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 02:15:27.539714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:15:27.539727 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:27.539761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 02:15:36.905338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 02:15:36.905415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:15:36.905427 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:36.905438 | orchestrator | 2026-04-07 02:15:36.905447 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-07 02:15:36.905457 | orchestrator | Tuesday 07 April 2026 02:15:27 +0000 (0:00:00.603) 0:02:26.347 ********* 2026-04-07 02:15:36.905466 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905487 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:36.905496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905513 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:36.905522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-07 02:15:36.905540 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:36.905554 
| orchestrator | 2026-04-07 02:15:36.905566 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-07 02:15:36.905574 | orchestrator | Tuesday 07 April 2026 02:15:28 +0000 (0:00:01.117) 0:02:27.465 ********* 2026-04-07 02:15:36.905582 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:36.905590 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:36.905614 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:15:36.905623 | orchestrator | 2026-04-07 02:15:36.905631 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-07 02:15:36.905638 | orchestrator | Tuesday 07 April 2026 02:15:29 +0000 (0:00:01.314) 0:02:28.779 ********* 2026-04-07 02:15:36.905646 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:36.905654 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:36.905662 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:15:36.905670 | orchestrator | 2026-04-07 02:15:36.905678 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-07 02:15:36.905685 | orchestrator | Tuesday 07 April 2026 02:15:32 +0000 (0:00:02.125) 0:02:30.905 ********* 2026-04-07 02:15:36.905693 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:36.905711 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:36.905719 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:36.905727 | orchestrator | 2026-04-07 02:15:36.905735 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-07 02:15:36.905773 | orchestrator | Tuesday 07 April 2026 02:15:32 +0000 (0:00:00.335) 0:02:31.241 ********* 2026-04-07 02:15:36.905784 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:15:36.905792 | orchestrator | 2026-04-07 02:15:36.905800 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-04-07 02:15:36.905808 | orchestrator | Tuesday 07 April 2026 02:15:33 +0000 (0:00:01.353) 0:02:32.594 ********* 2026-04-07 02:15:36.905817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 02:15:36.905829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:36.905838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 02:15:36.905853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 02:15:36.905868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:41.942431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:41.942540 | orchestrator | 2026-04-07 02:15:41.942558 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-07 02:15:41.942571 | orchestrator | Tuesday 07 April 2026 02:15:36 +0000 (0:00:03.118) 0:02:35.713 ********* 2026-04-07 02:15:41.942580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 02:15:41.942628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:41.942656 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:41.942669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 02:15:41.942692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:41.942699 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:41.942706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 02:15:41.942713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 02:15:41.942725 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:41.942732 | orchestrator | 2026-04-07 02:15:41.942739 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-07 02:15:41.942746 | orchestrator | Tuesday 07 April 2026 02:15:37 +0000 (0:00:00.611) 0:02:36.325 ********* 2026-04-07 02:15:41.942755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942771 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:41.942778 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942792 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:41.942798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-07 02:15:41.942812 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:41.942818 | orchestrator | 2026-04-07 02:15:41.942829 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-07 02:15:41.942836 | orchestrator | Tuesday 07 April 2026 02:15:38 +0000 (0:00:00.818) 0:02:37.143 ********* 2026-04-07 02:15:41.942843 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:41.942849 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:41.942856 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:15:41.942863 | orchestrator | 2026-04-07 02:15:41.942869 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-07 02:15:41.942876 | orchestrator | Tuesday 07 April 2026 02:15:39 +0000 (0:00:01.500) 0:02:38.644 ********* 2026-04-07 02:15:41.942883 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:41.942889 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:41.942896 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 02:15:41.942902 | orchestrator | 2026-04-07 02:15:41.942909 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-07 02:15:41.942920 | orchestrator | Tuesday 07 April 2026 02:15:41 +0000 (0:00:02.106) 0:02:40.750 ********* 2026-04-07 02:15:46.811967 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:15:46.812111 | orchestrator | 2026-04-07 02:15:46.812130 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-07 02:15:46.812140 | orchestrator | Tuesday 07 April 2026 02:15:43 +0000 (0:00:01.118) 0:02:41.868 ********* 2026-04-07 02:15:46.813081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 02:15:46.813198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 02:15:46.813280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 02:15:46.813325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:46.813356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880571 | orchestrator | 2026-04-07 02:15:47.880665 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-07 02:15:47.880678 | orchestrator | Tuesday 07 April 2026 02:15:46 +0000 (0:00:03.861) 0:02:45.730 ********* 2026-04-07 02:15:47.880707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 02:15:47.880719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880739 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880748 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:47.880771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 02:15:47.880796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880829 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:47.880837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 02:15:47.880850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 02:15:47.880874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 02:15:59.489574 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:59.489715 | orchestrator | 2026-04-07 02:15:59.489743 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-07 02:15:59.489765 | orchestrator | Tuesday 07 April 2026 02:15:47 +0000 (0:00:01.055) 0:02:46.785 ********* 2026-04-07 02:15:59.489789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489825 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:59.489837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489860 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
02:15:59.489871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-07 02:15:59.489893 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:15:59.489904 | orchestrator | 2026-04-07 02:15:59.489916 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-07 02:15:59.489927 | orchestrator | Tuesday 07 April 2026 02:15:48 +0000 (0:00:00.893) 0:02:47.678 ********* 2026-04-07 02:15:59.489938 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:59.489949 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:59.489960 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:15:59.489971 | orchestrator | 2026-04-07 02:15:59.489982 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-07 02:15:59.489993 | orchestrator | Tuesday 07 April 2026 02:15:50 +0000 (0:00:01.282) 0:02:48.961 ********* 2026-04-07 02:15:59.490004 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:15:59.490070 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:15:59.490084 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:15:59.490097 | orchestrator | 2026-04-07 02:15:59.490109 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-07 02:15:59.490122 | orchestrator | Tuesday 07 April 2026 02:15:52 +0000 (0:00:02.152) 0:02:51.114 ********* 2026-04-07 02:15:59.490135 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:15:59.490147 | orchestrator | 2026-04-07 02:15:59.490185 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-07 02:15:59.490197 | orchestrator | Tuesday 07 April 2026 02:15:53 +0000 (0:00:01.428) 0:02:52.542 ********* 2026-04-07 02:15:59.490211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 02:15:59.490224 | orchestrator | 2026-04-07 02:15:59.490261 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-07 02:15:59.490274 | orchestrator | Tuesday 07 April 2026 02:15:56 +0000 (0:00:03.236) 0:02:55.779 ********* 2026-04-07 02:15:59.490331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:15:59.490351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:15:59.490364 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:15:59.490385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:15:59.490407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:15:59.490419 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:15:59.490441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:16:02.046392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:16:02.046580 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:02.046611 | orchestrator | 2026-04-07 02:16:02.046625 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-07 02:16:02.046639 | orchestrator | Tuesday 07 April 2026 02:15:59 +0000 (0:00:02.518) 0:02:58.297 ********* 2026-04-07 02:16:02.046725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:16:02.046741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:16:02.046753 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:02.046797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:16:02.046833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:16:02.046845 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:02.046858 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:16:02.046879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-07 02:16:12.530758 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:12.530871 | orchestrator | 2026-04-07 02:16:12.530886 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-07 02:16:12.530899 | orchestrator | Tuesday 07 April 2026 02:16:02 +0000 (0:00:02.566) 0:03:00.864 ********* 2026-04-07 02:16:12.530910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.530956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.530967 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:12.530976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.530986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.530995 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:12.531004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.531013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-07 02:16:12.531022 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:12.531031 | orchestrator | 2026-04-07 02:16:12.531040 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-07 02:16:12.531049 | orchestrator | Tuesday 07 April 2026 02:16:05 +0000 (0:00:03.124) 0:03:03.988 ********* 2026-04-07 02:16:12.531058 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:16:12.531089 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:16:12.531099 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:16:12.531108 | orchestrator | 2026-04-07 02:16:12.531117 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-07 02:16:12.531126 | orchestrator | Tuesday 07 April 2026 02:16:07 +0000 (0:00:02.317) 0:03:06.306 ********* 2026-04-07 02:16:12.531134 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:12.531143 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:12.531197 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:12.531208 | orchestrator | 2026-04-07 02:16:12.531217 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-07 02:16:12.531226 | 
orchestrator | Tuesday 07 April 2026 02:16:08 +0000 (0:00:01.478) 0:03:07.785 ********* 2026-04-07 02:16:12.531235 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:12.531244 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:12.531253 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:12.531262 | orchestrator | 2026-04-07 02:16:12.531271 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-07 02:16:12.531279 | orchestrator | Tuesday 07 April 2026 02:16:09 +0000 (0:00:00.333) 0:03:08.118 ********* 2026-04-07 02:16:12.531288 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:16:12.531303 | orchestrator | 2026-04-07 02:16:12.531317 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-07 02:16:12.531341 | orchestrator | Tuesday 07 April 2026 02:16:10 +0000 (0:00:01.484) 0:03:09.603 ********* 2026-04-07 02:16:12.531365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 02:16:12.531385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 02:16:12.531401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 02:16:12.531414 | orchestrator | 2026-04-07 02:16:12.531428 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-07 02:16:12.531455 | orchestrator | Tuesday 07 April 2026 02:16:12 +0000 (0:00:01.543) 0:03:11.146 ********* 2026-04-07 02:16:12.531482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 02:16:21.281828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 02:16:21.281940 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:21.281959 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:21.281972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 02:16:21.281984 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:21.281995 | orchestrator | 2026-04-07 02:16:21.282008 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-07 02:16:21.282092 | orchestrator | Tuesday 07 April 2026 02:16:12 +0000 (0:00:00.397) 0:03:11.543 ********* 2026-04-07 02:16:21.282107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 02:16:21.282120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 02:16:21.282132 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:21.282143 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:21.282198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 02:16:21.282236 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:21.282248 | orchestrator | 2026-04-07 02:16:21.282300 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-07 
02:16:21.282312 | orchestrator | Tuesday 07 April 2026 02:16:13 +0000 (0:00:00.916) 0:03:12.460 ********* 2026-04-07 02:16:21.282323 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:21.282336 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:21.282349 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:21.282362 | orchestrator | 2026-04-07 02:16:21.282400 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-07 02:16:21.282426 | orchestrator | Tuesday 07 April 2026 02:16:14 +0000 (0:00:00.491) 0:03:12.952 ********* 2026-04-07 02:16:21.282449 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:21.282462 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:21.282475 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:21.282487 | orchestrator | 2026-04-07 02:16:21.282499 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-07 02:16:21.282513 | orchestrator | Tuesday 07 April 2026 02:16:15 +0000 (0:00:01.366) 0:03:14.318 ********* 2026-04-07 02:16:21.282527 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:21.282540 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:21.282553 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:21.282565 | orchestrator | 2026-04-07 02:16:21.282582 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-07 02:16:21.282602 | orchestrator | Tuesday 07 April 2026 02:16:15 +0000 (0:00:00.343) 0:03:14.661 ********* 2026-04-07 02:16:21.282630 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:16:21.282653 | orchestrator | 2026-04-07 02:16:21.282671 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-07 02:16:21.282688 | orchestrator | Tuesday 07 April 2026 02:16:17 +0000 (0:00:01.545) 0:03:16.207 
********* 2026-04-07 02:16:21.282734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:16:21.282768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.282790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.282828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.282849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:21.282889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.466434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.466573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.466592 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.466632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:21.466657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.466669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 02:16:21.466693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.466702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-04-07 02:16:21.466715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:21.466734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:21.466742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:16:21.466751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.466765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-04-07 02:16:21.578324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:21.578333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.578382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.578399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.578415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:21.578432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.726648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:21.726752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.726769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.726783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.726796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 02:16:21.726808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.726885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:21.726900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.726911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:21.726923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:21.726936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:21.726950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 02:16:21.726983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:22.944223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:22.944332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:22.944350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:22.944366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:22.944379 | orchestrator | 2026-04-07 02:16:22.944392 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-07 02:16:22.944428 | orchestrator | Tuesday 07 April 2026 02:16:21 +0000 (0:00:04.438) 0:03:20.645 ********* 2026-04-07 02:16:22.944458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:16:22.944489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:22.944504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:22.944516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:22.944527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:22.944552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:22.944566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:22.944585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.029747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.029841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:16:23.029860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:23.029922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.029944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.029984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 
02:16:23.030004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.030088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.030101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.030130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.030181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:23.030217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:23.148488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:16:23.148599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.148657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:23.148697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.148720 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:23.148744 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.148790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.148804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.148816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.148837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.148849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:23.148861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-07 02:16:23.148881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.357726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.357841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.357859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 02:16:23.357872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.357890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:23.357903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.357915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.357944 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:23.357965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:23.357984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:23.357996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:23.358008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-07 02:16:23.358091 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:23.358115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-07 02:16:34.316812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 02:16:34.316979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 02:16:34.317011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:16:34.317025 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:34.317039 | orchestrator | 2026-04-07 02:16:34.317068 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-07 02:16:34.318011 | orchestrator | Tuesday 07 April 2026 02:16:23 +0000 (0:00:01.527) 0:03:22.173 ********* 2026-04-07 02:16:34.318118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318137 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:16:34.318146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318187 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:16:34.318194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-07 02:16:34.318276 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:16:34.318286 | orchestrator | 2026-04-07 02:16:34.318294 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-07 02:16:34.318301 | orchestrator | Tuesday 07 April 2026 02:16:25 +0000 (0:00:02.138) 0:03:24.311 ********* 2026-04-07 02:16:34.318307 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:16:34.318314 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:16:34.318339 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:16:34.318347 | orchestrator | 2026-04-07 02:16:34.318354 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-07 02:16:34.318361 | orchestrator | Tuesday 07 April 2026 02:16:26 +0000 (0:00:01.385) 0:03:25.697 ********* 2026-04-07 02:16:34.318367 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:16:34.318374 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:16:34.318381 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:16:34.318388 | orchestrator | 2026-04-07 02:16:34.318395 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-07 02:16:34.318402 | orchestrator | Tuesday 07 
April 2026 02:16:28 +0000 (0:00:02.071) 0:03:27.769 ********* 2026-04-07 02:16:34.318409 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:16:34.318416 | orchestrator | 2026-04-07 02:16:34.318422 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-07 02:16:34.318428 | orchestrator | Tuesday 07 April 2026 02:16:30 +0000 (0:00:01.289) 0:03:29.058 ********* 2026-04-07 02:16:34.318436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:16:34.318454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:16:34.318460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:16:34.318474 | orchestrator | 2026-04-07 02:16:34.318480 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-07 02:16:34.318487 | orchestrator | Tuesday 07 April 2026 02:16:33 +0000 (0:00:03.524) 0:03:32.583 ********* 2026-04-07 02:16:34.318500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 02:16:45.537431 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:16:45.537537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 02:16:45.537549 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:16:45.537564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-07 02:16:45.537571 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:16:45.537578 | orchestrator |
2026-04-07 02:16:45.537586 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-07 02:16:45.537595 | orchestrator | Tuesday 07 April 2026 02:16:34 +0000 (0:00:00.549) 0:03:33.132 *********
2026-04-07 02:16:45.537602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537634 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:16:45.537640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537651 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:16:45.537657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-07 02:16:45.537670 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:16:45.537677 | orchestrator |
2026-04-07 02:16:45.537683 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-07 02:16:45.537690 | orchestrator | Tuesday 07 April 2026 02:16:35 +0000 (0:00:00.825) 0:03:33.958 *********
2026-04-07 02:16:45.537696 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:16:45.537703 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:16:45.537709 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:16:45.537716 | orchestrator |
2026-04-07 02:16:45.537722 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-07 02:16:45.537728 | orchestrator | Tuesday 07 April 2026 02:16:37 +0000 (0:00:01.976) 0:03:35.935 *********
2026-04-07 02:16:45.537734 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:16:45.537740 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:16:45.537760 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:16:45.537766 | orchestrator |
2026-04-07 02:16:45.537772 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-07 02:16:45.537779 | orchestrator | Tuesday 07 April 2026 02:16:39 +0000 (0:00:02.004) 0:03:37.940 *********
2026-04-07 02:16:45.537785 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:16:45.537792 | orchestrator |
2026-04-07 02:16:45.537798 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-07 02:16:45.537804 | orchestrator | Tuesday 07 April 2026 02:16:40 +0000 (0:00:01.699) 0:03:39.639 *********
2026-04-07 02:16:45.537813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:45.537835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:45.537843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:45.537855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:46.908607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:46.908766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908777 | orchestrator |
2026-04-07 02:16:46.908784 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-07 02:16:46.908790 | orchestrator | Tuesday 07 April 2026 02:16:45 +0000 (0:00:04.711) 0:03:44.352 *********
2026-04-07 02:16:46.908809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:46.908821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:46.908836 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:16:46.908843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:46.908852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:58.204745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:58.204884 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:16:58.204928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-07 02:16:58.204968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 02:16:58.204982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-07 02:16:58.204993 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:16:58.205005 | orchestrator |
2026-04-07 02:16:58.205018 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-07 02:16:58.205031 | orchestrator | Tuesday 07 April 2026 02:16:46 +0000 (0:00:01.365) 0:03:45.717 *********
2026-04-07 02:16:58.205044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205116 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:16:58.205127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205213 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:16:58.205224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-07 02:16:58.205283 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:16:58.205295 | orchestrator |
2026-04-07 02:16:58.205308 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-07 02:16:58.205321 | orchestrator | Tuesday 07 April 2026 02:16:47 +0000 (0:00:00.972) 0:03:46.689 *********
2026-04-07 02:16:58.205334 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:16:58.205346 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:16:58.205359 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:16:58.205372 | orchestrator |
2026-04-07 02:16:58.205385 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-04-07 02:16:58.205398 | orchestrator | Tuesday 07 April 2026 02:16:49 +0000 (0:00:01.467) 0:03:48.157 *********
2026-04-07 02:16:58.205410 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:16:58.205423 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:16:58.205436 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:16:58.205448 | orchestrator |
2026-04-07 02:16:58.205460 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-04-07 02:16:58.205473 | orchestrator | Tuesday 07 April 2026 02:16:51 +0000 (0:00:02.218) 0:03:50.375 *********
2026-04-07 02:16:58.205486 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:16:58.205498 | orchestrator |
2026-04-07 02:16:58.205510 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-04-07 02:16:58.205523 | orchestrator | Tuesday 07 April 2026 02:16:53 +0000 (0:00:01.666) 0:03:52.042 *********
2026-04-07 02:16:58.205536 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-04-07 02:16:58.205550 | orchestrator |
2026-04-07 02:16:58.205562 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-04-07 02:16:58.205575 | orchestrator | Tuesday 07 April 2026 02:16:54 +0000 (0:00:00.891) 0:03:52.933 *********
2026-04-07 02:16:58.205590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:16:58.205620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631315 | orchestrator |
2026-04-07 02:17:10.631326 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-04-07 02:17:10.631333 | orchestrator | Tuesday 07 April 2026 02:16:58 +0000 (0:00:04.086) 0:03:57.020 *********
2026-04-07 02:17:10.631339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631345 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:10.631363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631369 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:10.631373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631377 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:10.631382 | orchestrator |
2026-04-07 02:17:10.631386 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-04-07 02:17:10.631391 | orchestrator | Tuesday 07 April 2026 02:16:59 +0000 (0:00:01.433) 0:03:58.453 *********
2026-04-07 02:17:10.631397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631423 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:10.631428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631437 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:10.631441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-07 02:17:10.631461 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:10.631466 | orchestrator |
2026-04-07 02:17:10.631470 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-07 02:17:10.631474 | orchestrator | Tuesday 07 April 2026 02:17:01 +0000 (0:00:01.681) 0:04:00.135 *********
2026-04-07 02:17:10.631479 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:17:10.631483 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:17:10.631487 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:17:10.631491 | orchestrator |
2026-04-07 02:17:10.631495 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-07 02:17:10.631499 | orchestrator | Tuesday 07 April 2026 02:17:03 +0000 (0:00:02.540) 0:04:02.676 *********
2026-04-07 02:17:10.631504 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:17:10.631508 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:17:10.631512 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:17:10.631516 | orchestrator |
2026-04-07 02:17:10.631520 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-04-07 02:17:10.631524 | orchestrator | Tuesday 07 April 2026 02:17:06 +0000 (0:00:02.962) 0:04:05.639 *********
2026-04-07 02:17:10.631530 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-04-07 02:17:10.631535 | orchestrator |
2026-04-07 02:17:10.631539 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-04-07 02:17:10.631543 | orchestrator | Tuesday 07 April 2026 02:17:07 +0000 (0:00:01.169) 0:04:06.809 *********
2026-04-07 02:17:10.631552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631557 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:10.631561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631570 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:10.631574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631578 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:10.631583 | orchestrator |
2026-04-07 02:17:10.631587 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-04-07 02:17:10.631591 | orchestrator | Tuesday 07 April 2026 02:17:09 +0000 (0:00:01.113) 0:04:07.922 *********
2026-04-07 02:17:10.631595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631599 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:10.631604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:10.631611 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:36.001536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-07 02:17:36.001622 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:36.001631 | orchestrator |
2026-04-07 02:17:36.001638 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-04-07 02:17:36.001645 | orchestrator | Tuesday 07 April 2026 02:17:10 +0000 (0:00:01.517) 0:04:09.440 *********
2026-04-07 02:17:36.001652 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:36.001658 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:36.001663 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:36.001669 | orchestrator |
2026-04-07 02:17:36.001674 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-07 02:17:36.001680 | orchestrator | Tuesday 07 April 2026 02:17:12 +0000 (0:00:01.732) 0:04:11.173 *********
2026-04-07 02:17:36.001685 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:17:36.001692 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:17:36.001697 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:17:36.001702 | orchestrator |
2026-04-07 02:17:36.001708 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-07 02:17:36.001713 | orchestrator | Tuesday 07 April 2026 02:17:15 +0000 (0:00:02.812) 0:04:14.012 *********
2026-04-07 02:17:36.001736 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:17:36.001741 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:17:36.001747 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:17:36.001752 | orchestrator |
2026-04-07 02:17:36.001768 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-04-07 02:17:36.001774 | orchestrator | Tuesday 07 April 2026 02:17:17 +0000 (0:00:02.812) 0:04:16.824 *********
2026-04-07 02:17:36.001779 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0,
testbed-node-2 => (item=nova-serialproxy) 2026-04-07 02:17:36.001787 | orchestrator | 2026-04-07 02:17:36.001792 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-07 02:17:36.001797 | orchestrator | Tuesday 07 April 2026 02:17:19 +0000 (0:00:01.475) 0:04:18.300 ********* 2026-04-07 02:17:36.001803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001808 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:17:36.001814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001819 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:17:36.001825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001830 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:17:36.001836 | orchestrator | 2026-04-07 02:17:36.001841 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-07 02:17:36.001847 | orchestrator | Tuesday 07 April 2026 02:17:20 +0000 (0:00:01.417) 0:04:19.717 ********* 2026-04-07 02:17:36.001866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001872 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:17:36.001877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001887 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:17:36.001901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': 
False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 02:17:36.001906 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:17:36.001912 | orchestrator | 2026-04-07 02:17:36.001920 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-07 02:17:36.001926 | orchestrator | Tuesday 07 April 2026 02:17:22 +0000 (0:00:01.794) 0:04:21.512 ********* 2026-04-07 02:17:36.001931 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:17:36.001936 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:17:36.001941 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:17:36.001946 | orchestrator | 2026-04-07 02:17:36.001951 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 02:17:36.001956 | orchestrator | Tuesday 07 April 2026 02:17:24 +0000 (0:00:01.952) 0:04:23.464 ********* 2026-04-07 02:17:36.001962 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:17:36.001967 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:17:36.001972 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:17:36.001977 | orchestrator | 2026-04-07 02:17:36.001982 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 02:17:36.001987 | orchestrator | Tuesday 07 April 2026 02:17:27 +0000 (0:00:02.459) 0:04:25.924 ********* 2026-04-07 02:17:36.001992 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:17:36.001998 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:17:36.002003 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:17:36.002008 | orchestrator | 2026-04-07 
02:17:36.002055 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-07 02:17:36.002061 | orchestrator | Tuesday 07 April 2026 02:17:30 +0000 (0:00:03.404) 0:04:29.329 ********* 2026-04-07 02:17:36.002066 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:17:36.002072 | orchestrator | 2026-04-07 02:17:36.002077 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-07 02:17:36.002083 | orchestrator | Tuesday 07 April 2026 02:17:32 +0000 (0:00:01.811) 0:04:31.140 ********* 2026-04-07 02:17:36.002091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 02:17:36.002099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.002116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:17:36.732428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 02:17:36.732440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.732451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 02:17:36.732496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.732517 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:17:36.732535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732573 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.732590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:17:36.732600 | orchestrator | 2026-04-07 02:17:36.732611 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-07 02:17:36.732621 | orchestrator | Tuesday 07 April 2026 02:17:36 +0000 (0:00:03.807) 0:04:34.948 ********* 2026-04-07 02:17:36.732652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 02:17:36.871594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.871720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.871747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.871766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:17:36.871815 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:17:36.871837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 02:17:36.871858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.871918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.871939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.871959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 02:17:36.871990 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:17:36.872009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 02:17:36.872028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 02:17:36.872047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 02:17:36.872078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 02:17:49.424917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 02:17:49.425022 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:49.425037 | orchestrator |
2026-04-07 02:17:49.425048 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-07 02:17:49.425059 | orchestrator | Tuesday 07 April 2026 02:17:36 +0000 (0:00:00.739) 0:04:35.688 *********
2026-04-07 02:17:49.425069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425111 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:49.425127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425151 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:49.425164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-07 02:17:49.425274 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:49.425286 | orchestrator |
2026-04-07 02:17:49.425297 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-07 02:17:49.425308 | orchestrator | Tuesday 07 April 2026 02:17:37 +0000 (0:00:00.990) 0:04:36.679 *********
2026-04-07 02:17:49.425326 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:17:49.425344 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:17:49.425356 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:17:49.425368 | orchestrator |
2026-04-07 02:17:49.425381 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-07 02:17:49.425393 | orchestrator | Tuesday 07 April 2026 02:17:39 +0000 (0:00:01.809) 0:04:38.488 *********
2026-04-07 02:17:49.425405 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:17:49.425418 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:17:49.425431 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:17:49.425443 | orchestrator |
2026-04-07 02:17:49.425454 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-07 02:17:49.425466 | orchestrator | Tuesday 07 April 2026 02:17:42 +0000 (0:00:02.564) 0:04:41.052 *********
2026-04-07 02:17:49.425476 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:17:49.425489 | orchestrator |
2026-04-07 02:17:49.425502 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-07 02:17:49.425516 | orchestrator | Tuesday 07 April 2026 02:17:43 +0000 (0:00:01.476) 0:04:42.529 *********
2026-04-07 02:17:49.425547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:49.425590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:49.425618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:49.425634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:49.425655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:49.425679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:51.514825 | orchestrator |
2026-04-07 02:17:51.514910 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-07 02:17:51.514921 | orchestrator | Tuesday 07 April 2026 02:17:49 +0000 (0:00:05.697) 0:04:48.226 *********
2026-04-07 02:17:51.514932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:51.514945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:51.514954 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:51.514977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:51.514986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:51.515025 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:51.515034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-07 02:17:51.515042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-07 02:17:51.515050 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:51.515057 | orchestrator |
2026-04-07 02:17:51.515065 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-07 02:17:51.515073 | orchestrator | Tuesday 07 April 2026 02:17:50 +0000 (0:00:01.132) 0:04:49.359 *********
2026-04-07 02:17:51.515082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 02:17:51.515092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:51.515102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:51.515117 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:51.515128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 02:17:51.515136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:51.515144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:51.515151 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:51.515158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-07 02:17:51.515166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:51.515210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-07 02:17:57.964887 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:57.964998 | orchestrator |
2026-04-07 02:17:57.965016 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-07 02:17:57.965030 | orchestrator | Tuesday 07 April 2026 02:17:51 +0000 (0:00:00.967) 0:04:50.326 *********
2026-04-07 02:17:57.965042 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:57.965062 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:57.965081 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:57.965102 | orchestrator |
2026-04-07 02:17:57.965124 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-07 02:17:57.965145 | orchestrator | Tuesday 07 April 2026 02:17:51 +0000 (0:00:00.467) 0:04:50.794 *********
2026-04-07 02:17:57.965166 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:17:57.965245 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:17:57.965263 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:17:57.965281 | orchestrator |
2026-04-07 02:17:57.965299 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-07 02:17:57.965319 | orchestrator | Tuesday 07 April 2026 02:17:53 +0000 (0:00:01.542) 0:04:52.337 *********
2026-04-07 02:17:57.965338 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:17:57.965359 | orchestrator |
2026-04-07 02:17:57.965378 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-07 02:17:57.965397 | orchestrator | Tuesday 07 April 2026 02:17:55 +0000 (0:00:01.821) 0:04:54.159 *********
2026-04-07 02:17:57.965419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 02:17:57.965477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 02:17:57.965507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:57.965521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:57.965535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 02:17:57.965571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 02:17:57.965587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-07 02:17:57.965600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 02:17:57.965621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-07 02:17:57.965639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:57.965653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:57.965666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:57.965689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:59.625005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 02:17:59.625104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 02:17:59.625140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-07 02:17:59.625231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-07 02:17:59.625246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:59.625257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:59.625281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 02:17:59.625291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-07 02:17:59.625314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-07 02:17:59.625324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:59.625333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:17:59.625349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-07 02:18:00.377017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 02:18:00.377107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-07 02:18:00.377115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.377131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.377136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 02:18:00.377140 | orchestrator | 2026-04-07 02:18:00.377145 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-07 02:18:00.377151 | orchestrator | Tuesday 07 April 2026 02:17:59 +0000 (0:00:04.424) 0:04:58.583 ********* 2026-04-07 02:18:00.377156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 02:18:00.377217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 02:18:00.377229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.377233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.377238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 02:18:00.377247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 02:18:00.377252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 02:18:00.377261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 02:18:00.556373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 02:18:00.556451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 02:18:00.556496 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:00.556503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 02:18:00.556537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 02:18:00.556545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 02:18:00.556555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 02:18:00.556565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:00.556575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 02:18:00.556586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 02:18:02.324216 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:02.325009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:02.325041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:02.325067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 02:18:02.325081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 02:18:02.325093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-07 02:18:02.325121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:02.325148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 02:18:02.325157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 02:18:02.325166 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:02.325191 | orchestrator | 2026-04-07 02:18:02.325200 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-07 02:18:02.325210 | orchestrator | Tuesday 07 April 2026 02:18:00 +0000 (0:00:00.946) 0:04:59.529 ********* 2026-04-07 02:18:02.325224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:02.325258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:02.325268 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:02.325276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:02.325307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:02.325315 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:02.325323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-07 02:18:02.325359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:02.325373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-07 02:18:10.491458 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:10.491566 | orchestrator | 2026-04-07 02:18:10.491581 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-07 02:18:10.491593 | orchestrator | Tuesday 07 April 2026 02:18:02 +0000 (0:00:01.603) 0:05:01.133 ********* 2026-04-07 02:18:10.491604 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:10.491614 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:10.491622 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:10.491631 | orchestrator | 2026-04-07 02:18:10.491640 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-07 02:18:10.491650 | orchestrator | Tuesday 07 April 2026 02:18:02 +0000 (0:00:00.485) 0:05:01.618 ********* 2026-04-07 02:18:10.491659 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:10.491669 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:10.491678 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:10.491687 | orchestrator | 2026-04-07 02:18:10.491697 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-07 02:18:10.491707 | orchestrator | Tuesday 07 April 2026 02:18:04 +0000 (0:00:01.505) 0:05:03.124 ********* 2026-04-07 02:18:10.491717 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:18:10.491727 | orchestrator | 2026-04-07 02:18:10.491737 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-07 02:18:10.491747 | orchestrator | Tuesday 07 April 2026 02:18:06 +0000 (0:00:01.870) 0:05:04.995 ********* 2026-04-07 02:18:10.491763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:18:10.491800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:18:10.491854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:18:10.491865 | orchestrator | 2026-04-07 02:18:10.491875 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-07 02:18:10.491903 | orchestrator | Tuesday 07 April 2026 02:18:08 +0000 (0:00:02.227) 0:05:07.223 ********* 2026-04-07 02:18:10.491917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 02:18:10.491935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 02:18:10.491946 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:10.491956 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:10.491965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-07 02:18:10.491976 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:10.491985 | orchestrator | 2026-04-07 02:18:10.491996 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-07 02:18:10.492006 | orchestrator | Tuesday 07 April 2026 02:18:08 +0000 (0:00:00.524) 0:05:07.748 ********* 2026-04-07 02:18:10.492019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 02:18:10.492031 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:10.492041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 02:18:10.492052 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:10.492061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-07 02:18:10.492070 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:10.492080 | orchestrator | 2026-04-07 02:18:10.492090 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-07 02:18:10.492101 | orchestrator | Tuesday 07 April 2026 02:18:09 +0000 (0:00:01.040) 0:05:08.789 ********* 2026-04-07 02:18:10.492117 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 02:18:21.244764 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:21.244844 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:21.244852 | orchestrator | 2026-04-07 02:18:21.244860 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-07 02:18:21.244867 | orchestrator | Tuesday 07 April 2026 02:18:10 +0000 (0:00:00.521) 0:05:09.310 ********* 2026-04-07 02:18:21.244873 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:21.244892 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:21.244897 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:21.244901 | orchestrator | 2026-04-07 02:18:21.244906 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-07 02:18:21.244910 | orchestrator | Tuesday 07 April 2026 02:18:11 +0000 (0:00:01.476) 0:05:10.787 ********* 2026-04-07 02:18:21.244915 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:18:21.244920 | orchestrator | 2026-04-07 02:18:21.244924 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-07 02:18:21.244929 | orchestrator | Tuesday 07 April 2026 02:18:13 +0000 (0:00:01.564) 0:05:12.352 ********* 2026-04-07 02:18:21.244946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.244955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.244960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.244977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.244993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.244998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 02:18:21.245003 | orchestrator | 2026-04-07 02:18:21.245008 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-07 02:18:21.245013 | orchestrator | Tuesday 07 April 2026 02:18:20 +0000 (0:00:06.979) 0:05:19.331 ********* 2026-04-07 02:18:21.245017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 02:18:21.245026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 02:18:27.279891 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:27.280009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 02:18:27.280028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 02:18:27.280038 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:27.280047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 02:18:27.280055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 02:18:27.280082 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:27.280091 | orchestrator | 2026-04-07 02:18:27.280101 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-07 02:18:27.280111 | orchestrator | Tuesday 07 April 2026 02:18:21 +0000 (0:00:00.729) 0:05:20.060 ********* 2026-04-07 02:18:27.280135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280235 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:27.280244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 
02:18:27.280277 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:27.280286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-07 02:18:27.280320 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:27.280337 | orchestrator | 2026-04-07 02:18:27.280346 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-07 02:18:27.280355 | orchestrator | Tuesday 07 April 2026 02:18:22 +0000 (0:00:00.939) 0:05:21.000 ********* 2026-04-07 02:18:27.280363 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:18:27.280371 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:18:27.280379 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:18:27.280387 | orchestrator | 2026-04-07 02:18:27.280394 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-07 02:18:27.280403 | orchestrator | Tuesday 07 April 2026 02:18:23 +0000 (0:00:01.311) 0:05:22.311 ********* 2026-04-07 02:18:27.280412 | orchestrator | 
changed: [testbed-node-0] 2026-04-07 02:18:27.280421 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:18:27.280430 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:18:27.280439 | orchestrator | 2026-04-07 02:18:27.280447 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-07 02:18:27.280455 | orchestrator | Tuesday 07 April 2026 02:18:25 +0000 (0:00:02.344) 0:05:24.656 ********* 2026-04-07 02:18:27.280464 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:27.280472 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:27.280480 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:27.280489 | orchestrator | 2026-04-07 02:18:27.280498 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-07 02:18:27.280506 | orchestrator | Tuesday 07 April 2026 02:18:26 +0000 (0:00:00.703) 0:05:25.359 ********* 2026-04-07 02:18:27.280514 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:27.280522 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:18:27.280529 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:18:27.280538 | orchestrator | 2026-04-07 02:18:27.280546 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-07 02:18:27.280555 | orchestrator | Tuesday 07 April 2026 02:18:26 +0000 (0:00:00.389) 0:05:25.749 ********* 2026-04-07 02:18:27.280564 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:18:27.280583 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.974340 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.974461 | orchestrator | 2026-04-07 02:19:13.974486 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-07 02:19:13.974507 | orchestrator | Tuesday 07 April 2026 02:18:27 +0000 (0:00:00.350) 0:05:26.099 ********* 2026-04-07 02:19:13.974526 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 02:19:13.974545 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.974560 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.974577 | orchestrator | 2026-04-07 02:19:13.974596 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-07 02:19:13.974615 | orchestrator | Tuesday 07 April 2026 02:18:27 +0000 (0:00:00.372) 0:05:26.472 ********* 2026-04-07 02:19:13.974634 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.974654 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.974674 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.974693 | orchestrator | 2026-04-07 02:19:13.974713 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-07 02:19:13.974754 | orchestrator | Tuesday 07 April 2026 02:18:28 +0000 (0:00:00.704) 0:05:27.176 ********* 2026-04-07 02:19:13.974777 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.974799 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.974822 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.974841 | orchestrator | 2026-04-07 02:19:13.974863 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-07 02:19:13.974878 | orchestrator | Tuesday 07 April 2026 02:18:29 +0000 (0:00:00.741) 0:05:27.917 ********* 2026-04-07 02:19:13.974891 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.974905 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.974919 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.974933 | orchestrator | 2026-04-07 02:19:13.974946 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-07 02:19:13.974981 | orchestrator | Tuesday 07 April 2026 02:18:29 +0000 (0:00:00.685) 0:05:28.603 ********* 2026-04-07 02:19:13.974993 | orchestrator | ok: [testbed-node-0] 
2026-04-07 02:19:13.975004 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975015 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975026 | orchestrator | 2026-04-07 02:19:13.975037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-07 02:19:13.975047 | orchestrator | Tuesday 07 April 2026 02:18:30 +0000 (0:00:00.766) 0:05:29.370 ********* 2026-04-07 02:19:13.975058 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.975069 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975080 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975090 | orchestrator | 2026-04-07 02:19:13.975101 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-07 02:19:13.975112 | orchestrator | Tuesday 07 April 2026 02:18:31 +0000 (0:00:00.925) 0:05:30.295 ********* 2026-04-07 02:19:13.975123 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.975133 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975144 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975155 | orchestrator | 2026-04-07 02:19:13.975166 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-07 02:19:13.975177 | orchestrator | Tuesday 07 April 2026 02:18:32 +0000 (0:00:00.895) 0:05:31.191 ********* 2026-04-07 02:19:13.975214 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.975225 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975236 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975247 | orchestrator | 2026-04-07 02:19:13.975258 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-07 02:19:13.975268 | orchestrator | Tuesday 07 April 2026 02:18:33 +0000 (0:00:00.914) 0:05:32.106 ********* 2026-04-07 02:19:13.975279 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:19:13.975290 | orchestrator | changed: [testbed-node-1] 
2026-04-07 02:19:13.975301 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:19:13.975312 | orchestrator | 2026-04-07 02:19:13.975323 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-07 02:19:13.975334 | orchestrator | Tuesday 07 April 2026 02:18:38 +0000 (0:00:04.934) 0:05:37.040 ********* 2026-04-07 02:19:13.975344 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.975355 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975366 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975376 | orchestrator | 2026-04-07 02:19:13.975387 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-07 02:19:13.975398 | orchestrator | Tuesday 07 April 2026 02:18:41 +0000 (0:00:03.309) 0:05:40.350 ********* 2026-04-07 02:19:13.975408 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:19:13.975419 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:19:13.975430 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:19:13.975441 | orchestrator | 2026-04-07 02:19:13.975452 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-07 02:19:13.975463 | orchestrator | Tuesday 07 April 2026 02:18:57 +0000 (0:00:16.351) 0:05:56.701 ********* 2026-04-07 02:19:13.975473 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.975484 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.975494 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.975505 | orchestrator | 2026-04-07 02:19:13.975516 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-07 02:19:13.975527 | orchestrator | Tuesday 07 April 2026 02:18:58 +0000 (0:00:00.761) 0:05:57.463 ********* 2026-04-07 02:19:13.975537 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:19:13.975548 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:19:13.975559 | orchestrator | 
changed: [testbed-node-1] 2026-04-07 02:19:13.975569 | orchestrator | 2026-04-07 02:19:13.975580 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-07 02:19:13.975591 | orchestrator | Tuesday 07 April 2026 02:19:07 +0000 (0:00:09.351) 0:06:06.815 ********* 2026-04-07 02:19:13.975615 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.975626 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.975637 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.975649 | orchestrator | 2026-04-07 02:19:13.975669 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-07 02:19:13.975689 | orchestrator | Tuesday 07 April 2026 02:19:08 +0000 (0:00:00.797) 0:06:07.612 ********* 2026-04-07 02:19:13.975712 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.975738 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.975757 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.975775 | orchestrator | 2026-04-07 02:19:13.975821 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-07 02:19:13.975841 | orchestrator | Tuesday 07 April 2026 02:19:09 +0000 (0:00:00.402) 0:06:08.014 ********* 2026-04-07 02:19:13.975859 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.975877 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.975896 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.975914 | orchestrator | 2026-04-07 02:19:13.975932 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-07 02:19:13.975951 | orchestrator | Tuesday 07 April 2026 02:19:09 +0000 (0:00:00.392) 0:06:08.406 ********* 2026-04-07 02:19:13.975969 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.975988 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.976007 | orchestrator | 
skipping: [testbed-node-2] 2026-04-07 02:19:13.976026 | orchestrator | 2026-04-07 02:19:13.976044 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-07 02:19:13.976062 | orchestrator | Tuesday 07 April 2026 02:19:10 +0000 (0:00:00.447) 0:06:08.854 ********* 2026-04-07 02:19:13.976083 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.976116 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.976139 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.976158 | orchestrator | 2026-04-07 02:19:13.976177 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-07 02:19:13.976215 | orchestrator | Tuesday 07 April 2026 02:19:10 +0000 (0:00:00.787) 0:06:09.641 ********* 2026-04-07 02:19:13.976227 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:13.976238 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:13.976249 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:13.976259 | orchestrator | 2026-04-07 02:19:13.976270 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-07 02:19:13.976281 | orchestrator | Tuesday 07 April 2026 02:19:11 +0000 (0:00:00.408) 0:06:10.050 ********* 2026-04-07 02:19:13.976292 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.976303 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.976314 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:13.976325 | orchestrator | 2026-04-07 02:19:13.976336 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-07 02:19:13.976347 | orchestrator | Tuesday 07 April 2026 02:19:12 +0000 (0:00:00.924) 0:06:10.975 ********* 2026-04-07 02:19:13.976358 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:13.976368 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:13.976379 | orchestrator | ok: [testbed-node-2] 2026-04-07 
02:19:13.976390 | orchestrator | 2026-04-07 02:19:13.976401 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:19:13.976413 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 02:19:13.976426 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 02:19:13.976437 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-07 02:19:13.976448 | orchestrator | 2026-04-07 02:19:13.976469 | orchestrator | 2026-04-07 02:19:13.976480 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:19:13.976492 | orchestrator | Tuesday 07 April 2026 02:19:12 +0000 (0:00:00.841) 0:06:11.816 ********* 2026-04-07 02:19:13.976502 | orchestrator | =============================================================================== 2026-04-07 02:19:13.976513 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.35s 2026-04-07 02:19:13.976524 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.35s 2026-04-07 02:19:13.976535 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.98s 2026-04-07 02:19:13.976546 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s 2026-04-07 02:19:13.976556 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.93s 2026-04-07 02:19:13.976567 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.71s 2026-04-07 02:19:13.976578 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.44s 2026-04-07 02:19:13.976589 | orchestrator | haproxy-config : Copying over prometheus haproxy config 
----------------- 4.42s 2026-04-07 02:19:13.976599 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.15s 2026-04-07 02:19:13.976610 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s 2026-04-07 02:19:13.976621 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.00s 2026-04-07 02:19:13.976632 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.86s 2026-04-07 02:19:13.976643 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.85s 2026-04-07 02:19:13.976653 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.81s 2026-04-07 02:19:13.976664 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.80s 2026-04-07 02:19:13.976675 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.76s 2026-04-07 02:19:13.976686 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.72s 2026-04-07 02:19:13.976697 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.52s 2026-04-07 02:19:13.976707 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.52s 2026-04-07 02:19:13.976718 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.48s 2026-04-07 02:19:16.659842 | orchestrator | 2026-04-07 02:19:16 | INFO  | Task 07de6418-c3bd-4942-95f9-225bc6a92fb2 (opensearch) was prepared for execution. 2026-04-07 02:19:16.659963 | orchestrator | 2026-04-07 02:19:16 | INFO  | It takes a moment until task 07de6418-c3bd-4942-95f9-225bc6a92fb2 (opensearch) has been started and output is visible here. 
2026-04-07 02:19:29.148693 | orchestrator | 2026-04-07 02:19:29.148787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:19:29.148800 | orchestrator | 2026-04-07 02:19:29.148809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:19:29.148818 | orchestrator | Tuesday 07 April 2026 02:19:22 +0000 (0:00:00.460) 0:00:00.460 ********* 2026-04-07 02:19:29.148826 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:19:29.148836 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:19:29.148844 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:19:29.148852 | orchestrator | 2026-04-07 02:19:29.148860 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:19:29.148869 | orchestrator | Tuesday 07 April 2026 02:19:22 +0000 (0:00:00.392) 0:00:00.853 ********* 2026-04-07 02:19:29.148891 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-07 02:19:29.148900 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-07 02:19:29.148908 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-07 02:19:29.148916 | orchestrator | 2026-04-07 02:19:29.148924 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-07 02:19:29.148952 | orchestrator | 2026-04-07 02:19:29.148961 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 02:19:29.148968 | orchestrator | Tuesday 07 April 2026 02:19:23 +0000 (0:00:00.464) 0:00:01.318 ********* 2026-04-07 02:19:29.148977 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:19:29.148985 | orchestrator | 2026-04-07 02:19:29.148993 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-07 02:19:29.149001 | orchestrator | Tuesday 07 April 2026 02:19:23 +0000 (0:00:00.502) 0:00:01.821 ********* 2026-04-07 02:19:29.149009 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 02:19:29.149017 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 02:19:29.149026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-07 02:19:29.149034 | orchestrator | 2026-04-07 02:19:29.149041 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-07 02:19:29.149050 | orchestrator | Tuesday 07 April 2026 02:19:24 +0000 (0:00:00.760) 0:00:02.581 ********* 2026-04-07 02:19:29.149061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:29.149073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:29.149096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:29.149112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:29.149129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:29.149138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:29.149147 | orchestrator | 2026-04-07 02:19:29.149155 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 02:19:29.149164 | orchestrator | Tuesday 07 April 2026 02:19:26 +0000 (0:00:01.729) 0:00:04.311 ********* 2026-04-07 02:19:29.149171 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:19:29.149179 | orchestrator | 2026-04-07 02:19:29.149237 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-07 02:19:29.149249 | orchestrator | Tuesday 07 April 2026 02:19:26 +0000 (0:00:00.566) 0:00:04.877 ********* 2026-04-07 02:19:29.149270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:30.035554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:30.035656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:30.035674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:30.035687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:30.035751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:30.035764 | orchestrator | 2026-04-07 02:19:30.035775 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-07 02:19:30.035785 | orchestrator | Tuesday 07 April 2026 02:19:29 +0000 (0:00:02.396) 0:00:07.274 ********* 2026-04-07 02:19:30.035796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:30.035805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:30.035816 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:30.035827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:30.035864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:31.142689 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:31.142776 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:31.142793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:31.142802 | 
orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:31.142810 | orchestrator | 2026-04-07 02:19:31.142819 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-07 02:19:31.142828 | orchestrator | Tuesday 07 April 2026 02:19:30 +0000 (0:00:00.888) 0:00:08.163 ********* 2026-04-07 02:19:31.142857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:31.142878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:31.142901 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:19:31.142909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:31.142917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:31.142925 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:19:31.142942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-07 02:19:31.142955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-07 02:19:31.142963 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:19:31.142970 | orchestrator | 2026-04-07 02:19:31.142978 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-07 02:19:31.142993 | orchestrator | Tuesday 07 April 2026 02:19:31 +0000 (0:00:01.101) 0:00:09.264 ********* 2026-04-07 02:19:39.690803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:39.690942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:39.690972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:39.691042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:39.691096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:39.691121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:19:39.691156 | orchestrator | 2026-04-07 02:19:39.691177 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-07 02:19:39.691263 | orchestrator | Tuesday 07 April 2026 02:19:33 +0000 (0:00:02.380) 0:00:11.645 ********* 2026-04-07 02:19:39.691285 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:19:39.691307 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:19:39.691325 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:19:39.691345 | orchestrator | 2026-04-07 02:19:39.691365 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-07 02:19:39.691384 | orchestrator | Tuesday 07 April 2026 02:19:36 +0000 (0:00:02.552) 0:00:14.197 ********* 2026-04-07 02:19:39.691404 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:19:39.691423 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:19:39.691442 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:19:39.691462 | orchestrator | 2026-04-07 02:19:39.691482 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-07 
02:19:39.691500 | orchestrator | Tuesday 07 April 2026 02:19:37 +0000 (0:00:01.910) 0:00:16.108 ********* 2026-04-07 02:19:39.691521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:39.691553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:19:39.691589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-07 02:22:20.639604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:22:20.639715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:22:20.639735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-07 02:22:20.639742 | orchestrator | 2026-04-07 02:22:20.639749 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 02:22:20.639755 | orchestrator | Tuesday 07 April 2026 02:19:39 +0000 (0:00:01.712) 0:00:17.820 ********* 2026-04-07 02:22:20.639760 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:22:20.639766 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:22:20.639771 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:22:20.639776 | orchestrator | 2026-04-07 02:22:20.639781 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 02:22:20.639786 | orchestrator | Tuesday 07 April 2026 02:19:39 +0000 (0:00:00.298) 0:00:18.119 ********* 2026-04-07 02:22:20.639791 | orchestrator | 2026-04-07 02:22:20.639796 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 02:22:20.639801 | orchestrator | Tuesday 07 April 2026 02:19:40 +0000 (0:00:00.078) 0:00:18.197 ********* 2026-04-07 02:22:20.639806 | orchestrator | 2026-04-07 02:22:20.639810 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-07 02:22:20.639820 | orchestrator | Tuesday 07 April 2026 02:19:40 +0000 (0:00:00.088) 0:00:18.286 ********* 2026-04-07 02:22:20.639825 | orchestrator | 2026-04-07 02:22:20.639830 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-07 02:22:20.639846 | orchestrator | Tuesday 07 April 2026 02:19:40 +0000 (0:00:00.069) 0:00:18.355 ********* 2026-04-07 02:22:20.639852 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:22:20.639856 | orchestrator | 2026-04-07 02:22:20.639861 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-07 02:22:20.639866 | 
orchestrator | Tuesday 07 April 2026 02:19:40 +0000 (0:00:00.213) 0:00:18.569 ********* 2026-04-07 02:22:20.639871 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:22:20.639876 | orchestrator | 2026-04-07 02:22:20.639880 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-07 02:22:20.639885 | orchestrator | Tuesday 07 April 2026 02:19:41 +0000 (0:00:00.751) 0:00:19.320 ********* 2026-04-07 02:22:20.639890 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:20.639895 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:22:20.639900 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:22:20.639905 | orchestrator | 2026-04-07 02:22:20.639909 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-07 02:22:20.639914 | orchestrator | Tuesday 07 April 2026 02:20:49 +0000 (0:01:07.942) 0:01:27.263 ********* 2026-04-07 02:22:20.639919 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:20.639924 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:22:20.639928 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:22:20.639933 | orchestrator | 2026-04-07 02:22:20.639938 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 02:22:20.639943 | orchestrator | Tuesday 07 April 2026 02:22:09 +0000 (0:01:20.554) 0:02:47.817 ********* 2026-04-07 02:22:20.639948 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:22:20.639953 | orchestrator | 2026-04-07 02:22:20.639958 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-07 02:22:20.639963 | orchestrator | Tuesday 07 April 2026 02:22:10 +0000 (0:00:00.557) 0:02:48.374 ********* 2026-04-07 02:22:20.639968 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:22:20.639973 | orchestrator | 2026-04-07 
02:22:20.639978 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-07 02:22:20.639982 | orchestrator | Tuesday 07 April 2026 02:22:12 +0000 (0:00:02.669) 0:02:51.044 ********* 2026-04-07 02:22:20.639987 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:22:20.639992 | orchestrator | 2026-04-07 02:22:20.639997 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-07 02:22:20.640002 | orchestrator | Tuesday 07 April 2026 02:22:15 +0000 (0:00:02.311) 0:02:53.355 ********* 2026-04-07 02:22:20.640007 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:20.640012 | orchestrator | 2026-04-07 02:22:20.640016 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-07 02:22:20.640023 | orchestrator | Tuesday 07 April 2026 02:22:18 +0000 (0:00:02.819) 0:02:56.175 ********* 2026-04-07 02:22:20.640031 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:20.640040 | orchestrator | 2026-04-07 02:22:20.640048 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:22:20.640057 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:22:20.640067 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 02:22:20.640082 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 02:22:20.640091 | orchestrator | 2026-04-07 02:22:20.640099 | orchestrator | 2026-04-07 02:22:20.640112 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:22:20.640120 | orchestrator | Tuesday 07 April 2026 02:22:20 +0000 (0:00:02.572) 0:02:58.747 ********* 2026-04-07 02:22:20.640128 | orchestrator | 
=============================================================================== 2026-04-07 02:22:20.640137 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.55s 2026-04-07 02:22:20.640145 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.94s 2026-04-07 02:22:20.640153 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.82s 2026-04-07 02:22:20.640160 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s 2026-04-07 02:22:20.640168 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-04-07 02:22:20.640176 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.55s 2026-04-07 02:22:20.640184 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.40s 2026-04-07 02:22:20.640192 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2026-04-07 02:22:20.640201 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.31s 2026-04-07 02:22:20.640209 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.91s 2026-04-07 02:22:20.640243 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.73s 2026-04-07 02:22:20.640250 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.71s 2026-04-07 02:22:20.640258 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.10s 2026-04-07 02:22:20.640267 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2026-04-07 02:22:20.640275 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.76s 2026-04-07 02:22:20.640282 | orchestrator | 
opensearch : Perform a flush -------------------------------------------- 0.75s 2026-04-07 02:22:20.640298 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-07 02:22:21.015353 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-04-07 02:22:21.015447 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-04-07 02:22:21.015463 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-07 02:22:23.627945 | orchestrator | 2026-04-07 02:22:23 | INFO  | Task 88e9642f-079a-43b2-be75-616577f629c7 (memcached) was prepared for execution. 2026-04-07 02:22:23.628030 | orchestrator | 2026-04-07 02:22:23 | INFO  | It takes a moment until task 88e9642f-079a-43b2-be75-616577f629c7 (memcached) has been started and output is visible here. 2026-04-07 02:22:41.560638 | orchestrator | 2026-04-07 02:22:41.560719 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:22:41.560746 | orchestrator | 2026-04-07 02:22:41.560761 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:22:41.560770 | orchestrator | Tuesday 07 April 2026 02:22:28 +0000 (0:00:00.337) 0:00:00.337 ********* 2026-04-07 02:22:41.560777 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:22:41.560785 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:22:41.560791 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:22:41.560797 | orchestrator | 2026-04-07 02:22:41.560804 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:22:41.560811 | orchestrator | Tuesday 07 April 2026 02:22:28 +0000 (0:00:00.341) 0:00:00.679 ********* 2026-04-07 02:22:41.560818 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-07 02:22:41.560825 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-07 02:22:41.560831 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-07 02:22:41.560837 | orchestrator | 2026-04-07 02:22:41.560843 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-07 02:22:41.560869 | orchestrator | 2026-04-07 02:22:41.560876 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-07 02:22:41.560882 | orchestrator | Tuesday 07 April 2026 02:22:29 +0000 (0:00:00.493) 0:00:01.172 ********* 2026-04-07 02:22:41.560889 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:22:41.560896 | orchestrator | 2026-04-07 02:22:41.560902 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-07 02:22:41.560908 | orchestrator | Tuesday 07 April 2026 02:22:29 +0000 (0:00:00.508) 0:00:01.681 ********* 2026-04-07 02:22:41.560915 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-07 02:22:41.560921 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-07 02:22:41.560928 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-07 02:22:41.560934 | orchestrator | 2026-04-07 02:22:41.560940 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-07 02:22:41.560946 | orchestrator | Tuesday 07 April 2026 02:22:30 +0000 (0:00:00.699) 0:00:02.380 ********* 2026-04-07 02:22:41.560952 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-07 02:22:41.560959 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-07 02:22:41.560965 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-07 02:22:41.560971 | orchestrator | 2026-04-07 02:22:41.560977 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-04-07 02:22:41.560984 | orchestrator | Tuesday 07 April 2026 02:22:32 +0000 (0:00:01.838) 0:00:04.219 ********* 2026-04-07 02:22:41.561001 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:22:41.561008 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:22:41.561014 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:41.561020 | orchestrator | 2026-04-07 02:22:41.561026 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-07 02:22:41.561032 | orchestrator | Tuesday 07 April 2026 02:22:33 +0000 (0:00:01.696) 0:00:05.915 ********* 2026-04-07 02:22:41.561038 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:22:41.561044 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:22:41.561050 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:22:41.561056 | orchestrator | 2026-04-07 02:22:41.561062 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:22:41.561069 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:22:41.561076 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:22:41.561082 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:22:41.561089 | orchestrator | 2026-04-07 02:22:41.561095 | orchestrator | 2026-04-07 02:22:41.561101 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:22:41.561107 | orchestrator | Tuesday 07 April 2026 02:22:41 +0000 (0:00:07.146) 0:00:13.062 ********* 2026-04-07 02:22:41.561113 | orchestrator | =============================================================================== 2026-04-07 02:22:41.561119 | orchestrator | memcached : Restart memcached container 
--------------------------------- 7.15s 2026-04-07 02:22:41.561125 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.84s 2026-04-07 02:22:41.561131 | orchestrator | memcached : Check memcached container ----------------------------------- 1.70s 2026-04-07 02:22:41.561137 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s 2026-04-07 02:22:41.561143 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-04-07 02:22:41.561149 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-04-07 02:22:41.561161 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-07 02:22:44.027847 | orchestrator | 2026-04-07 02:22:44 | INFO  | Task 037bc376-4a63-4669-bb63-ca592c6e7465 (redis) was prepared for execution. 2026-04-07 02:22:44.027944 | orchestrator | 2026-04-07 02:22:44 | INFO  | It takes a moment until task 037bc376-4a63-4669-bb63-ca592c6e7465 (redis) has been started and output is visible here. 
2026-04-07 02:22:53.318368 | orchestrator | 2026-04-07 02:22:53.318471 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:22:53.318486 | orchestrator | 2026-04-07 02:22:53.318497 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:22:53.318508 | orchestrator | Tuesday 07 April 2026 02:22:48 +0000 (0:00:00.290) 0:00:00.290 ********* 2026-04-07 02:22:53.318518 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:22:53.318530 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:22:53.318540 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:22:53.318550 | orchestrator | 2026-04-07 02:22:53.318560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:22:53.318570 | orchestrator | Tuesday 07 April 2026 02:22:48 +0000 (0:00:00.318) 0:00:00.609 ********* 2026-04-07 02:22:53.318580 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-07 02:22:53.318590 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-07 02:22:53.318600 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-07 02:22:53.318610 | orchestrator | 2026-04-07 02:22:53.318620 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-07 02:22:53.318630 | orchestrator | 2026-04-07 02:22:53.318640 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-07 02:22:53.318650 | orchestrator | Tuesday 07 April 2026 02:22:49 +0000 (0:00:00.439) 0:00:01.048 ********* 2026-04-07 02:22:53.318659 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:22:53.318670 | orchestrator | 2026-04-07 02:22:53.318680 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-07 
02:22:53.318690 | orchestrator | Tuesday 07 April 2026 02:22:49 +0000 (0:00:00.520) 0:00:01.569 ********* 2026-04-07 02:22:53.318703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318765 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318814 | orchestrator | 2026-04-07 02:22:53.318824 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-07 02:22:53.318834 | orchestrator | Tuesday 07 April 2026 02:22:50 +0000 (0:00:01.099) 0:00:02.668 ********* 2026-04-07 02:22:53.318845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.318989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:53.319011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.488985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489093 | orchestrator | 2026-04-07 02:22:57.489112 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-07 02:22:57.489126 | orchestrator | Tuesday 07 April 2026 02:22:53 +0000 (0:00:02.469) 0:00:05.137 ********* 2026-04-07 02:22:57.489139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 
02:22:57.489182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489417 | orchestrator | 2026-04-07 02:22:57.489429 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-07 02:22:57.489440 | orchestrator | Tuesday 07 April 2026 02:22:55 +0000 (0:00:02.480) 0:00:07.618 ********* 2026-04-07 02:22:57.489453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:22:57.489557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 02:23:13.959703 | orchestrator | 2026-04-07 02:23:13.959840 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 02:23:13.959871 | orchestrator | Tuesday 07 April 2026 02:22:57 +0000 (0:00:01.480) 0:00:09.098 ********* 2026-04-07 02:23:13.959892 | orchestrator | 2026-04-07 02:23:13.959908 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 02:23:13.959924 | orchestrator | Tuesday 07 April 2026 02:22:57 +0000 (0:00:00.067) 0:00:09.166 ********* 2026-04-07 02:23:13.959940 | orchestrator | 2026-04-07 02:23:13.959955 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 02:23:13.959970 | orchestrator | Tuesday 07 April 2026 
02:22:57 +0000 (0:00:00.068) 0:00:09.234 *********
2026-04-07 02:23:13.959986 | orchestrator | 
2026-04-07 02:23:13.960001 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-07 02:23:13.960016 | orchestrator | Tuesday 07 April 2026 02:22:57 +0000 (0:00:00.070) 0:00:09.305 *********
2026-04-07 02:23:13.960032 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:23:13.960050 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:23:13.960066 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:23:13.960081 | orchestrator | 
2026-04-07 02:23:13.960127 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-07 02:23:13.960144 | orchestrator | Tuesday 07 April 2026 02:23:05 +0000 (0:00:07.943) 0:00:17.249 *********
2026-04-07 02:23:13.960210 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:23:13.960302 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:23:13.960321 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:23:13.960339 | orchestrator | 
2026-04-07 02:23:13.960350 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:23:13.960362 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:23:13.960377 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:23:13.960402 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 02:23:13.960415 | orchestrator | 
2026-04-07 02:23:13.960426 | orchestrator | 
2026-04-07 02:23:13.960438 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:23:13.960450 | orchestrator | Tuesday 07 April 2026 02:23:13 +0000 (0:00:08.136) 0:00:25.385 *********
2026-04-07 02:23:13.960461 | orchestrator | ===============================================================================
2026-04-07 02:23:13.960473 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.14s
2026-04-07 02:23:13.960488 | orchestrator | redis : Restart redis container ----------------------------------------- 7.94s
2026-04-07 02:23:13.960505 | orchestrator | redis : Copying over redis config files --------------------------------- 2.48s
2026-04-07 02:23:13.960520 | orchestrator | redis : Copying over default config.json files -------------------------- 2.47s
2026-04-07 02:23:13.960543 | orchestrator | redis : Check redis containers ------------------------------------------ 1.48s
2026-04-07 02:23:13.960563 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.10s
2026-04-07 02:23:13.960579 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s
2026-04-07 02:23:13.960594 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-04-07 02:23:13.960610 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-04-07 02:23:13.960623 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2026-04-07 02:23:16.500395 | orchestrator | 2026-04-07 02:23:16 | INFO  | Task 2e9841f9-0a2d-4781-919d-595e434a709c (mariadb) was prepared for execution.
2026-04-07 02:23:16.500489 | orchestrator | 2026-04-07 02:23:16 | INFO  | It takes a moment until task 2e9841f9-0a2d-4781-919d-595e434a709c (mariadb) has been started and output is visible here.
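Each play in this job ends with a PLAY RECAP whose per-host counters (ok/changed/unreachable/failed/...) determine whether the run succeeded. As a minimal sketch for post-processing console logs like this one (a hypothetical helper, not part of the job itself), the recap lines can be parsed into integers so a tool can assert `failed=0` and `unreachable=0` for every host:

```python
import re

# Matches one PLAY RECAP host line from ansible-playbook output, e.g.:
#   testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
# Only the first four counters are captured here; skipped/rescued/ignored are left out.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line: str) -> dict:
    """Parse a single PLAY RECAP host line into a dict of integer counters."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    groups = m.groupdict()
    host = groups.pop("host")
    return {"host": host, **{k: int(v) for k, v in groups.items()}}

line = "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
stats = parse_recap(line)
print(stats["host"], stats["failed"])  # testbed-node-0 0
```

A log scanner built on this would read each `| orchestrator |` record, feed the three host lines after `PLAY RECAP` through `parse_recap`, and flag the play if any host reports a nonzero `failed` or `unreachable` count.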
2026-04-07 02:23:30.769026 | orchestrator | 
2026-04-07 02:23:30.769141 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 02:23:30.769158 | orchestrator | 
2026-04-07 02:23:30.769171 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 02:23:30.769182 | orchestrator | Tuesday 07 April 2026 02:23:20 +0000 (0:00:00.188) 0:00:00.188 *********
2026-04-07 02:23:30.769194 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:23:30.769206 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:23:30.769217 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:23:30.769265 | orchestrator | 
2026-04-07 02:23:30.769277 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 02:23:30.769289 | orchestrator | Tuesday 07 April 2026 02:23:21 +0000 (0:00:00.333) 0:00:00.522 *********
2026-04-07 02:23:30.769301 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-07 02:23:30.769313 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-07 02:23:30.769324 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-07 02:23:30.769335 | orchestrator | 
2026-04-07 02:23:30.769346 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-07 02:23:30.769357 | orchestrator | 
2026-04-07 02:23:30.769369 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-07 02:23:30.769403 | orchestrator | Tuesday 07 April 2026 02:23:21 +0000 (0:00:00.591) 0:00:01.113 *********
2026-04-07 02:23:30.769414 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 02:23:30.769425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 02:23:30.769436 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 02:23:30.769447 | orchestrator | 
2026-04-07 02:23:30.769458 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 02:23:30.769469 | orchestrator | Tuesday 07 April 2026 02:23:22 +0000 (0:00:00.400) 0:00:01.514 ********* 2026-04-07 02:23:30.769481 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:23:30.769493 | orchestrator | 2026-04-07 02:23:30.769504 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-07 02:23:30.769514 | orchestrator | Tuesday 07 April 2026 02:23:22 +0000 (0:00:00.554) 0:00:02.068 ********* 2026-04-07 02:23:30.769547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:30.769586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:30.769615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:30.769628 | orchestrator | 2026-04-07 02:23:30.769640 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-07 02:23:30.769651 | orchestrator | Tuesday 07 April 2026 02:23:25 +0000 (0:00:02.635) 0:00:04.704 ********* 2026-04-07 02:23:30.769662 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:30.769674 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:23:30.769685 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:30.769696 | orchestrator | 2026-04-07 02:23:30.769707 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-07 02:23:30.769718 | orchestrator | Tuesday 07 April 2026 02:23:26 +0000 (0:00:00.706) 0:00:05.410 ********* 2026-04-07 02:23:30.769728 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:30.769739 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:30.769750 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:23:30.769761 | orchestrator | 2026-04-07 02:23:30.769771 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-07 02:23:30.769782 | orchestrator | Tuesday 07 April 2026 02:23:27 +0000 (0:00:01.533) 0:00:06.944 ********* 2026-04-07 02:23:30.769803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:38.748031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:38.748141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:38.748174 | orchestrator | 2026-04-07 02:23:38.748187 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-07 02:23:38.748198 | orchestrator | Tuesday 07 April 2026 02:23:30 +0000 (0:00:03.089) 0:00:10.034 ********* 2026-04-07 02:23:38.748207 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:38.748218 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:38.748259 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:23:38.748268 | orchestrator | 2026-04-07 02:23:38.748277 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-07 02:23:38.748301 | orchestrator | Tuesday 07 April 2026 02:23:31 +0000 (0:00:01.113) 0:00:11.147 ********* 2026-04-07 02:23:38.748311 | 
orchestrator | changed: [testbed-node-0] 2026-04-07 02:23:38.748320 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:23:38.748329 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:23:38.748338 | orchestrator | 2026-04-07 02:23:38.748347 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 02:23:38.748355 | orchestrator | Tuesday 07 April 2026 02:23:35 +0000 (0:00:03.986) 0:00:15.134 ********* 2026-04-07 02:23:38.748365 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:23:38.748374 | orchestrator | 2026-04-07 02:23:38.748383 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-07 02:23:38.748391 | orchestrator | Tuesday 07 April 2026 02:23:36 +0000 (0:00:00.608) 0:00:15.742 ********* 2026-04-07 02:23:38.748408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:38.748425 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:23:38.748443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:43.729098 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:43.729212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:43.729323 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:43.729336 | orchestrator | 2026-04-07 02:23:43.729347 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-07 02:23:43.729357 | orchestrator | Tuesday 07 April 2026 02:23:38 +0000 (0:00:02.269) 0:00:18.012 ********* 2026-04-07 02:23:43.729368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:43.729378 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:23:43.729411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:43.729437 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:43.729454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:43.729470 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:43.729485 | orchestrator | 2026-04-07 02:23:43.729500 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-07 02:23:43.729515 | orchestrator | Tuesday 07 April 2026 02:23:41 +0000 (0:00:02.606) 0:00:20.619 ********* 2026-04-07 02:23:43.729549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:46.586636 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:23:46.586728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:46.586745 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:23:46.586770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 02:23:46.586798 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:23:46.586807 | orchestrator | 2026-04-07 02:23:46.586817 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-07 02:23:46.586827 | orchestrator | Tuesday 07 April 2026 02:23:43 +0000 (0:00:02.379) 0:00:22.998 ********* 2026-04-07 02:23:46.586852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:46.586863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 02:23:46.586891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 02:26:04.588849 | orchestrator |
2026-04-07 02:26:04.588934 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-07 02:26:04.588942 | orchestrator | Tuesday 07 April 2026 02:23:46 +0000 (0:00:02.853) 0:00:25.852 *********
2026-04-07 02:26:04.588947 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:26:04.588952 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:26:04.588957 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:26:04.588961 | orchestrator |
2026-04-07 02:26:04.588965 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-07 02:26:04.588969 | orchestrator | Tuesday 07 April 2026 02:23:47 +0000 (0:00:00.822) 0:00:26.674 *********
2026-04-07 02:26:04.588973 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.588978 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:26:04.588982 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:26:04.588986 | orchestrator |
2026-04-07 02:26:04.588990 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-07 02:26:04.588994 | orchestrator | Tuesday 07 April 2026 02:23:47 +0000 (0:00:00.566) 0:00:27.240 *********
2026-04-07 02:26:04.588998 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589002 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:26:04.589006 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:26:04.589010 | orchestrator |
2026-04-07 02:26:04.589013 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-07 02:26:04.589017 | orchestrator | Tuesday 07 April 2026 02:23:48 +0000 (0:00:00.308) 0:00:27.549 *********
2026-04-07 02:26:04.589023 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-07 02:26:04.589028 | orchestrator | ...ignoring
2026-04-07 02:26:04.589033 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-07 02:26:04.589037 | orchestrator | ...ignoring
2026-04-07 02:26:04.589041 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-07 02:26:04.589045 | orchestrator | ...ignoring
2026-04-07 02:26:04.589062 | orchestrator |
2026-04-07 02:26:04.589066 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-07 02:26:04.589070 | orchestrator | Tuesday 07 April 2026 02:23:59 +0000 (0:00:10.861) 0:00:38.411 *********
2026-04-07 02:26:04.589074 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589078 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:26:04.589081 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:26:04.589085 | orchestrator |
2026-04-07 02:26:04.589089 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-07 02:26:04.589093 | orchestrator | Tuesday 07 April 2026 02:23:59 +0000 (0:00:00.468) 0:00:38.879 *********
2026-04-07 02:26:04.589097 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589101 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589104 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589108 | orchestrator |
2026-04-07 02:26:04.589112 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-07 02:26:04.589116 | orchestrator | Tuesday 07 April 2026 02:24:00 +0000 (0:00:00.676) 0:00:39.556 *********
2026-04-07 02:26:04.589120 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589124 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589128 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589133 | orchestrator |
2026-04-07 02:26:04.589151 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-07 02:26:04.589158 | orchestrator | Tuesday 07 April 2026 02:24:00 +0000 (0:00:00.448) 0:00:40.005 *********
2026-04-07 02:26:04.589164 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589170 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589176 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589182 | orchestrator |
2026-04-07 02:26:04.589188 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-07 02:26:04.589193 | orchestrator | Tuesday 07 April 2026 02:24:01 +0000 (0:00:00.460) 0:00:40.465 *********
2026-04-07 02:26:04.589199 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589205 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:26:04.589211 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:26:04.589216 | orchestrator |
2026-04-07 02:26:04.589222 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-07 02:26:04.589228 | orchestrator | Tuesday 07 April 2026 02:24:01 +0000 (0:00:00.716) 0:00:40.924 *********
2026-04-07 02:26:04.589293 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589302 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589308 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589313 | orchestrator |
2026-04-07 02:26:04.589321 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 02:26:04.589327 | orchestrator | Tuesday 07 April 2026 02:24:02 +0000 (0:00:00.420) 0:00:41.640 *********
2026-04-07 02:26:04.589334 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589341 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589346 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-07 02:26:04.589350 | orchestrator |
2026-04-07 02:26:04.589354 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-07 02:26:04.589358 | orchestrator | Tuesday 07 April 2026 02:24:02 +0000 (0:00:00.420) 0:00:42.061 *********
2026-04-07 02:26:04.589362 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:26:04.589365 | orchestrator |
2026-04-07 02:26:04.589369 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-07 02:26:04.589373 | orchestrator | Tuesday 07 April 2026 02:24:13 +0000 (0:00:10.312) 0:00:52.374 *********
2026-04-07 02:26:04.589377 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589381 | orchestrator |
2026-04-07 02:26:04.589384 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 02:26:04.589429 | orchestrator | Tuesday 07 April 2026 02:24:13 +0000 (0:00:00.128) 0:00:52.502 *********
2026-04-07 02:26:04.589434 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589462 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589467 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589472 | orchestrator |
2026-04-07 02:26:04.589478 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-07 02:26:04.589484 | orchestrator | Tuesday 07 April 2026 02:24:14 +0000 (0:00:01.043) 0:00:53.545 *********
2026-04-07 02:26:04.589491 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:26:04.589497 | orchestrator |
2026-04-07 02:26:04.589504 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-07 02:26:04.589514 | orchestrator | Tuesday 07 April 2026 02:24:22 +0000 (0:00:08.333) 0:01:01.879 *********
2026-04-07 02:26:04.589522 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589528 | orchestrator |
2026-04-07 02:26:04.589535 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-07 02:26:04.589541 | orchestrator | Tuesday 07 April 2026 02:24:24 +0000 (0:00:01.677) 0:01:03.557 *********
2026-04-07 02:26:04.589573 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:26:04.589580 | orchestrator |
2026-04-07 02:26:04.589587 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-07 02:26:04.589594 | orchestrator | Tuesday 07 April 2026 02:24:26 +0000 (0:00:02.649) 0:01:06.207 *********
2026-04-07 02:26:04.589598 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:26:04.589603 | orchestrator |
2026-04-07 02:26:04.589608 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-07 02:26:04.589612 | orchestrator | Tuesday 07 April 2026 02:24:27 +0000 (0:00:00.126) 0:01:06.334 *********
2026-04-07 02:26:04.589617 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589621 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:26:04.589626 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:26:04.589630 | orchestrator |
2026-04-07 02:26:04.589634 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-07 02:26:04.589639 | orchestrator | Tuesday 07 April 2026 02:24:27 +0000 (0:00:00.340) 0:01:06.674 *********
2026-04-07 02:26:04.589643 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:26:04.589648 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-07 02:26:04.589652 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:26:04.589657 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:26:04.589661 | orchestrator |
2026-04-07 02:26:04.589665 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-07 02:26:04.589670 | orchestrator | skipping: no hosts matched
2026-04-07 02:26:04.589674 | orchestrator |
2026-04-07 02:26:04.589679 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-07 02:26:04.589683 | orchestrator |
2026-04-07 02:26:04.589687 | orchestrator | TASK [mariadb : Restart MariaDB container]
************************************* 2026-04-07 02:26:04.589692 | orchestrator | Tuesday 07 April 2026 02:24:27 +0000 (0:00:00.576) 0:01:07.251 ********* 2026-04-07 02:26:04.589697 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:26:04.589701 | orchestrator | 2026-04-07 02:26:04.589705 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 02:26:04.589709 | orchestrator | Tuesday 07 April 2026 02:24:46 +0000 (0:00:18.509) 0:01:25.760 ********* 2026-04-07 02:26:04.589714 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:26:04.589718 | orchestrator | 2026-04-07 02:26:04.589723 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 02:26:04.589727 | orchestrator | Tuesday 07 April 2026 02:25:03 +0000 (0:00:16.599) 0:01:42.359 ********* 2026-04-07 02:26:04.589732 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:26:04.589736 | orchestrator | 2026-04-07 02:26:04.589743 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-07 02:26:04.589747 | orchestrator | 2026-04-07 02:26:04.589759 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-07 02:26:04.589763 | orchestrator | Tuesday 07 April 2026 02:25:05 +0000 (0:00:02.554) 0:01:44.914 ********* 2026-04-07 02:26:04.589774 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:26:04.589778 | orchestrator | 2026-04-07 02:26:04.589783 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 02:26:04.589788 | orchestrator | Tuesday 07 April 2026 02:25:23 +0000 (0:00:18.355) 0:02:03.269 ********* 2026-04-07 02:26:04.589792 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:26:04.589796 | orchestrator | 2026-04-07 02:26:04.589800 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 02:26:04.589804 
| orchestrator | Tuesday 07 April 2026 02:25:40 +0000 (0:00:16.581) 0:02:19.850 ********* 2026-04-07 02:26:04.589808 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:26:04.589825 | orchestrator | 2026-04-07 02:26:04.589830 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-07 02:26:04.589833 | orchestrator | 2026-04-07 02:26:04.589837 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-07 02:26:04.589841 | orchestrator | Tuesday 07 April 2026 02:25:43 +0000 (0:00:02.592) 0:02:22.443 ********* 2026-04-07 02:26:04.589845 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:04.589848 | orchestrator | 2026-04-07 02:26:04.589852 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 02:26:04.589856 | orchestrator | Tuesday 07 April 2026 02:25:55 +0000 (0:00:12.285) 0:02:34.728 ********* 2026-04-07 02:26:04.589860 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:04.589864 | orchestrator | 2026-04-07 02:26:04.589867 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 02:26:04.589871 | orchestrator | Tuesday 07 April 2026 02:26:01 +0000 (0:00:05.602) 0:02:40.331 ********* 2026-04-07 02:26:04.589875 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:04.589879 | orchestrator | 2026-04-07 02:26:04.589882 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-07 02:26:04.589886 | orchestrator | 2026-04-07 02:26:04.589890 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-07 02:26:04.589894 | orchestrator | Tuesday 07 April 2026 02:26:04 +0000 (0:00:02.974) 0:02:43.305 ********* 2026-04-07 02:26:04.589897 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:26:04.589901 | orchestrator | 
2026-04-07 02:26:04.589905 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-07 02:26:04.589914 | orchestrator | Tuesday 07 April 2026 02:26:04 +0000 (0:00:00.546) 0:02:43.852 ********* 2026-04-07 02:26:18.215396 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:18.215489 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:18.215500 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:18.215508 | orchestrator | 2026-04-07 02:26:18.215517 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-07 02:26:18.215526 | orchestrator | Tuesday 07 April 2026 02:26:07 +0000 (0:00:02.583) 0:02:46.435 ********* 2026-04-07 02:26:18.215534 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:18.215541 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:18.215548 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:18.215555 | orchestrator | 2026-04-07 02:26:18.215563 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-07 02:26:18.215570 | orchestrator | Tuesday 07 April 2026 02:26:09 +0000 (0:00:02.260) 0:02:48.696 ********* 2026-04-07 02:26:18.215577 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:18.215585 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:18.215592 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:18.215599 | orchestrator | 2026-04-07 02:26:18.215606 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-07 02:26:18.215614 | orchestrator | Tuesday 07 April 2026 02:26:12 +0000 (0:00:02.603) 0:02:51.300 ********* 2026-04-07 02:26:18.215621 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:18.215628 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:18.215635 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:18.215664 | orchestrator | 
2026-04-07 02:26:18.215672 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-07 02:26:18.215679 | orchestrator | Tuesday 07 April 2026 02:26:14 +0000 (0:00:02.320) 0:02:53.620 ********* 2026-04-07 02:26:18.215686 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:26:18.215697 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:18.215708 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:26:18.215723 | orchestrator | 2026-04-07 02:26:18.215742 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-07 02:26:18.215754 | orchestrator | Tuesday 07 April 2026 02:26:17 +0000 (0:00:03.024) 0:02:56.645 ********* 2026-04-07 02:26:18.215765 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:18.215776 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:18.215788 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:18.215798 | orchestrator | 2026-04-07 02:26:18.215809 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:26:18.215823 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-07 02:26:18.215836 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-07 02:26:18.215848 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-07 02:26:18.215859 | orchestrator | 2026-04-07 02:26:18.215871 | orchestrator | 2026-04-07 02:26:18.215883 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:26:18.215895 | orchestrator | Tuesday 07 April 2026 02:26:17 +0000 (0:00:00.451) 0:02:57.096 ********* 2026-04-07 02:26:18.215905 | orchestrator | =============================================================================== 2026-04-07 02:26:18.215924 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.87s 2026-04-07 02:26:18.215932 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.18s 2026-04-07 02:26:18.215939 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.29s 2026-04-07 02:26:18.215946 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s 2026-04-07 02:26:18.215953 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.31s 2026-04-07 02:26:18.215960 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.33s 2026-04-07 02:26:18.215968 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.60s 2026-04-07 02:26:18.215975 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.15s 2026-04-07 02:26:18.215983 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.99s 2026-04-07 02:26:18.215990 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.09s 2026-04-07 02:26:18.215997 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.02s 2026-04-07 02:26:18.216004 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.97s 2026-04-07 02:26:18.216011 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.85s 2026-04-07 02:26:18.216018 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.65s 2026-04-07 02:26:18.216025 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.64s 2026-04-07 02:26:18.216032 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.61s 2026-04-07 02:26:18.216040 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.60s 2026-04-07 02:26:18.216047 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.58s 2026-04-07 02:26:18.216054 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.38s 2026-04-07 02:26:18.216069 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.32s 2026-04-07 02:26:20.774495 | orchestrator | 2026-04-07 02:26:20 | INFO  | Task ffc3df99-a394-479f-9350-9c7601c3171f (rabbitmq) was prepared for execution. 2026-04-07 02:26:20.774584 | orchestrator | 2026-04-07 02:26:20 | INFO  | It takes a moment until task ffc3df99-a394-479f-9350-9c7601c3171f (rabbitmq) has been started and output is visible here. 2026-04-07 02:26:34.414434 | orchestrator | 2026-04-07 02:26:34.414569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:26:34.414599 | orchestrator | 2026-04-07 02:26:34.414620 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:26:34.414642 | orchestrator | Tuesday 07 April 2026 02:26:25 +0000 (0:00:00.179) 0:00:00.179 ********* 2026-04-07 02:26:34.414664 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:34.414687 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:26:34.414708 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:26:34.414729 | orchestrator | 2026-04-07 02:26:34.414749 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:26:34.414769 | orchestrator | Tuesday 07 April 2026 02:26:25 +0000 (0:00:00.309) 0:00:00.488 ********* 2026-04-07 02:26:34.414791 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-07 02:26:34.414812 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-07 02:26:34.414834 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-07 02:26:34.414855 | orchestrator | 2026-04-07 02:26:34.414873 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-07 02:26:34.414887 | orchestrator | 2026-04-07 02:26:34.414900 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 02:26:34.414913 | orchestrator | Tuesday 07 April 2026 02:26:26 +0000 (0:00:00.572) 0:00:01.060 ********* 2026-04-07 02:26:34.414928 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:26:34.414941 | orchestrator | 2026-04-07 02:26:34.414955 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-07 02:26:34.414968 | orchestrator | Tuesday 07 April 2026 02:26:26 +0000 (0:00:00.543) 0:00:01.604 ********* 2026-04-07 02:26:34.414981 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:34.414994 | orchestrator | 2026-04-07 02:26:34.415007 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-07 02:26:34.415019 | orchestrator | Tuesday 07 April 2026 02:26:27 +0000 (0:00:01.005) 0:00:02.609 ********* 2026-04-07 02:26:34.415032 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415046 | orchestrator | 2026-04-07 02:26:34.415059 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-07 02:26:34.415072 | orchestrator | Tuesday 07 April 2026 02:26:27 +0000 (0:00:00.379) 0:00:02.989 ********* 2026-04-07 02:26:34.415084 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415096 | orchestrator | 2026-04-07 02:26:34.415109 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-07 02:26:34.415121 | orchestrator | Tuesday 07 April 2026 02:26:28 +0000 (0:00:00.393) 0:00:03.382 ********* 
2026-04-07 02:26:34.415134 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415148 | orchestrator | 2026-04-07 02:26:34.415159 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-07 02:26:34.415170 | orchestrator | Tuesday 07 April 2026 02:26:28 +0000 (0:00:00.357) 0:00:03.740 ********* 2026-04-07 02:26:34.415182 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415192 | orchestrator | 2026-04-07 02:26:34.415203 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 02:26:34.415214 | orchestrator | Tuesday 07 April 2026 02:26:29 +0000 (0:00:00.579) 0:00:04.320 ********* 2026-04-07 02:26:34.415272 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:26:34.415312 | orchestrator | 2026-04-07 02:26:34.415324 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-07 02:26:34.415336 | orchestrator | Tuesday 07 April 2026 02:26:30 +0000 (0:00:00.873) 0:00:05.194 ********* 2026-04-07 02:26:34.415354 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:26:34.415374 | orchestrator | 2026-04-07 02:26:34.415393 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-07 02:26:34.415412 | orchestrator | Tuesday 07 April 2026 02:26:31 +0000 (0:00:00.903) 0:00:06.098 ********* 2026-04-07 02:26:34.415429 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415446 | orchestrator | 2026-04-07 02:26:34.415457 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-07 02:26:34.415468 | orchestrator | Tuesday 07 April 2026 02:26:31 +0000 (0:00:00.426) 0:00:06.524 ********* 2026-04-07 02:26:34.415478 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:34.415489 | orchestrator | 2026-04-07 
02:26:34.415500 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-07 02:26:34.415510 | orchestrator | Tuesday 07 April 2026 02:26:31 +0000 (0:00:00.374) 0:00:06.898 ********* 2026-04-07 02:26:34.415549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:34.415567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:34.415587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:34.415610 | orchestrator | 2026-04-07 02:26:34.415622 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-07 02:26:34.415633 | orchestrator | Tuesday 07 April 2026 02:26:32 +0000 (0:00:00.846) 0:00:07.745 ********* 2026-04-07 02:26:34.415645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:34.415667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:53.407845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:53.408805 | orchestrator | 2026-04-07 02:26:53.408854 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-07 02:26:53.408919 | orchestrator | Tuesday 07 April 2026 02:26:34 +0000 (0:00:01.662) 0:00:09.407 ********* 2026-04-07 02:26:53.408940 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 02:26:53.408960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 02:26:53.408981 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-07 02:26:53.409001 | orchestrator | 2026-04-07 02:26:53.409021 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-07 02:26:53.409042 | orchestrator | Tuesday 07 April 2026 02:26:35 +0000 (0:00:01.513) 0:00:10.921 ********* 2026-04-07 02:26:53.409079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 02:26:53.409097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 02:26:53.409109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-07 02:26:53.409120 | orchestrator | 2026-04-07 02:26:53.409130 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-07 02:26:53.409141 | orchestrator | Tuesday 07 April 2026 02:26:37 +0000 (0:00:01.707) 0:00:12.629 ********* 2026-04-07 02:26:53.409152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 02:26:53.409163 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 02:26:53.409174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-07 02:26:53.409185 | orchestrator | 2026-04-07 02:26:53.409195 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-07 02:26:53.409206 | orchestrator | Tuesday 07 April 2026 02:26:39 +0000 (0:00:01.404) 0:00:14.033 ********* 2026-04-07 02:26:53.409217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 02:26:53.409228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 02:26:53.409239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-07 02:26:53.409317 | orchestrator | 2026-04-07 02:26:53.409329 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-04-07 02:26:53.409339 | orchestrator | Tuesday 07 April 2026 02:26:40 +0000 (0:00:01.666) 0:00:15.700 ********* 2026-04-07 02:26:53.409350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 02:26:53.409361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 02:26:53.409372 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-07 02:26:53.409383 | orchestrator | 2026-04-07 02:26:53.409394 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-07 02:26:53.409406 | orchestrator | Tuesday 07 April 2026 02:26:42 +0000 (0:00:01.366) 0:00:17.067 ********* 2026-04-07 02:26:53.409427 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 02:26:53.409439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 02:26:53.409453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-07 02:26:53.409493 | orchestrator | 2026-04-07 02:26:53.409519 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 02:26:53.409538 | orchestrator | Tuesday 07 April 2026 02:26:43 +0000 (0:00:01.370) 0:00:18.437 ********* 2026-04-07 02:26:53.409556 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:26:53.409575 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:26:53.409618 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:26:53.409654 | orchestrator | 2026-04-07 02:26:53.409673 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-07 02:26:53.409693 | orchestrator | Tuesday 
07 April 2026 02:26:43 +0000 (0:00:00.438) 0:00:18.876 ********* 2026-04-07 02:26:53.409716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:53.409739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:53.409761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-07 02:26:53.409780 | orchestrator | 2026-04-07 02:26:53.409874 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-07 02:26:53.409900 | orchestrator | Tuesday 07 April 2026 02:26:45 +0000 (0:00:01.285) 0:00:20.161 ********* 2026-04-07 02:26:53.409912 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:26:53.409924 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:26:53.409935 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:26:53.409946 | orchestrator | 2026-04-07 02:26:53.409965 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-07 02:26:53.410001 | orchestrator | Tuesday 07 April 2026 02:26:45 +0000 (0:00:00.830) 0:00:20.991 *********
2026-04-07 02:26:53.410145 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:26:53.410169 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:26:53.410180 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:26:53.410191 | orchestrator |
2026-04-07 02:26:53.410202 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-07 02:26:53.410227 | orchestrator | Tuesday 07 April 2026 02:26:53 +0000 (0:00:07.409) 0:00:28.400 *********
2026-04-07 02:28:29.833625 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:28:29.833729 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:28:29.833746 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:28:29.833760 | orchestrator |
2026-04-07 02:28:29.833777 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 02:28:29.833795 | orchestrator |
2026-04-07 02:28:29.833808 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 02:28:29.833822 | orchestrator | Tuesday 07 April 2026 02:26:53 +0000 (0:00:00.538) 0:00:28.939 *********
2026-04-07 02:28:29.833836 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:28:29.833856 | orchestrator |
2026-04-07 02:28:29.833875 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 02:28:29.833889 | orchestrator | Tuesday 07 April 2026 02:26:54 +0000 (0:00:00.618) 0:00:29.558 *********
2026-04-07 02:28:29.833904 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:28:29.833917 | orchestrator |
2026-04-07 02:28:29.833931 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-07 02:28:29.833944 | orchestrator | Tuesday 07 April 2026 02:26:54 +0000 (0:00:00.246) 0:00:29.804 *********
2026-04-07 02:28:29.833959 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:28:29.833974 | orchestrator |
2026-04-07 02:28:29.833989 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-07 02:28:29.834002 | orchestrator | Tuesday 07 April 2026 02:27:01 +0000 (0:00:06.707) 0:00:36.512 *********
2026-04-07 02:28:29.834063 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:28:29.834074 | orchestrator |
2026-04-07 02:28:29.834082 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 02:28:29.834090 | orchestrator |
2026-04-07 02:28:29.834098 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 02:28:29.834106 | orchestrator | Tuesday 07 April 2026 02:27:52 +0000 (0:00:51.102) 0:01:27.614 *********
2026-04-07 02:28:29.834114 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:28:29.834122 | orchestrator |
2026-04-07 02:28:29.834130 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 02:28:29.834138 | orchestrator | Tuesday 07 April 2026 02:27:53 +0000 (0:00:00.655) 0:01:28.270 *********
2026-04-07 02:28:29.834146 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:28:29.834154 | orchestrator |
2026-04-07 02:28:29.834161 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-07 02:28:29.834170 | orchestrator | Tuesday 07 April 2026 02:27:53 +0000 (0:00:00.258) 0:01:28.528 *********
2026-04-07 02:28:29.834179 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:28:29.834189 | orchestrator |
2026-04-07 02:28:29.834198 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-07 02:28:29.834221 | orchestrator | Tuesday 07 April 2026 02:28:00 +0000 (0:00:06.579) 0:01:35.108 *********
2026-04-07 02:28:29.834231 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:28:29.834241 | orchestrator |
2026-04-07 02:28:29.834250 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 02:28:29.834286 | orchestrator |
2026-04-07 02:28:29.834296 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 02:28:29.834305 | orchestrator | Tuesday 07 April 2026 02:28:09 +0000 (0:00:09.824) 0:01:44.932 *********
2026-04-07 02:28:29.834315 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:28:29.834324 | orchestrator |
2026-04-07 02:28:29.834354 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 02:28:29.834364 | orchestrator | Tuesday 07 April 2026 02:28:10 +0000 (0:00:00.788) 0:01:45.721 *********
2026-04-07 02:28:29.834373 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:28:29.834383 | orchestrator |
2026-04-07 02:28:29.834392 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-07 02:28:29.834401 | orchestrator | Tuesday 07 April 2026 02:28:10 +0000 (0:00:00.244) 0:01:45.965 *********
2026-04-07 02:28:29.834411 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:28:29.834421 | orchestrator |
2026-04-07 02:28:29.834430 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-07 02:28:29.834439 | orchestrator | Tuesday 07 April 2026 02:28:12 +0000 (0:00:01.637) 0:01:47.602 *********
2026-04-07 02:28:29.834448 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:28:29.834458 | orchestrator |
2026-04-07 02:28:29.834467 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-07 02:28:29.834476 | orchestrator |
2026-04-07 02:28:29.834485 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-07 02:28:29.834494 | orchestrator | Tuesday 07 April 2026 02:28:26 +0000 (0:00:13.875) 0:02:01.478 *********
2026-04-07 02:28:29.834506 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:28:29.834519 | orchestrator |
2026-04-07 02:28:29.834531 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-07 02:28:29.834543 | orchestrator | Tuesday 07 April 2026 02:28:27 +0000 (0:00:00.555) 0:02:02.033 *********
2026-04-07 02:28:29.834555 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-07 02:28:29.834566 | orchestrator | enable_outward_rabbitmq_True
2026-04-07 02:28:29.834578 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-07 02:28:29.834589 | orchestrator | outward_rabbitmq_restart
2026-04-07 02:28:29.834601 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:28:29.834613 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:28:29.834625 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:28:29.834637 | orchestrator |
2026-04-07 02:28:29.834649 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-07 02:28:29.834661 | orchestrator | skipping: no hosts matched
2026-04-07 02:28:29.834674 | orchestrator |
2026-04-07 02:28:29.834686 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-07 02:28:29.834699 | orchestrator | skipping: no hosts matched
2026-04-07 02:28:29.834712 | orchestrator |
2026-04-07 02:28:29.834726 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-07 02:28:29.834739 | orchestrator | skipping: no hosts matched
2026-04-07 02:28:29.834752 | orchestrator |
2026-04-07 02:28:29.834765 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:28:29.834802 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-07 02:28:29.834817 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 02:28:29.834826 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 02:28:29.834834 | orchestrator |
2026-04-07 02:28:29.834842 | orchestrator |
2026-04-07 02:28:29.834850 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:28:29.834995 | orchestrator | Tuesday 07 April 2026 02:28:29 +0000 (0:00:02.425) 0:02:04.458 *********
2026-04-07 02:28:29.835006 | orchestrator | ===============================================================================
2026-04-07 02:28:29.835014 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.80s
2026-04-07 02:28:29.835022 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.92s
2026-04-07 02:28:29.835042 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.41s
2026-04-07 02:28:29.835050 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.43s
2026-04-07 02:28:29.835058 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.06s
2026-04-07 02:28:29.835066 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s
2026-04-07 02:28:29.835074 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.67s
2026-04-07 02:28:29.835082 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.66s
2026-04-07 02:28:29.835090 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s
2026-04-07 02:28:29.835097 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s
2026-04-07 02:28:29.835105 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s
2026-04-07 02:28:29.835113 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s
2026-04-07 02:28:29.835121 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.29s
2026-04-07 02:28:29.835129 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s
2026-04-07 02:28:29.835144 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.90s
2026-04-07 02:28:29.835153 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.87s
2026-04-07 02:28:29.835161 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.85s
2026-04-07 02:28:29.835169 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.83s
2026-04-07 02:28:29.835176 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s
2026-04-07 02:28:29.835184 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.58s
2026-04-07 02:28:32.475466 | orchestrator | 2026-04-07 02:28:32 | INFO  | Task aaaaca8e-ba7f-43d5-8121-e48f70b94430 (openvswitch) was prepared for execution.
2026-04-07 02:28:32.475536 | orchestrator | 2026-04-07 02:28:32 | INFO  | It takes a moment until task aaaaca8e-ba7f-43d5-8121-e48f70b94430 (openvswitch) has been started and output is visible here.
2026-04-07 02:28:46.043020 | orchestrator |
2026-04-07 02:28:46.043125 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 02:28:46.043141 | orchestrator |
2026-04-07 02:28:46.043152 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 02:28:46.043163 | orchestrator | Tuesday 07 April 2026 02:28:37 +0000 (0:00:00.271) 0:00:00.271 *********
2026-04-07 02:28:46.043172 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:28:46.043184 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:28:46.043194 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:28:46.043204 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:28:46.043213 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:28:46.043223 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:28:46.043232 | orchestrator |
2026-04-07 02:28:46.043242 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 02:28:46.043252 | orchestrator | Tuesday 07 April 2026 02:28:37 +0000 (0:00:00.709) 0:00:00.981 *********
2026-04-07 02:28:46.043310 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043321 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043331 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043341 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043350 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043360 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-07 02:28:46.043392 | orchestrator |
2026-04-07 02:28:46.043402 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-07 02:28:46.043412 | orchestrator |
2026-04-07 02:28:46.043422 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-07 02:28:46.043432 | orchestrator | Tuesday 07 April 2026 02:28:38 +0000 (0:00:00.683) 0:00:01.664 *********
2026-04-07 02:28:46.043443 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:28:46.043454 | orchestrator |
2026-04-07 02:28:46.043463 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-07 02:28:46.043473 | orchestrator | Tuesday 07 April 2026 02:28:39 +0000 (0:00:01.207) 0:00:02.872 *********
2026-04-07 02:28:46.043482 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-07 02:28:46.043493 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-07 02:28:46.043502 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-07 02:28:46.043511 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-07 02:28:46.043521 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-07 02:28:46.043530 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-07 02:28:46.043540 | orchestrator |
2026-04-07 02:28:46.043549 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-07 02:28:46.043559 | orchestrator | Tuesday 07 April 2026 02:28:40 +0000 (0:00:01.270) 0:00:04.142 *********
2026-04-07 02:28:46.043570 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-07 02:28:46.043582 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-07 02:28:46.043593 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-07 02:28:46.043605 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-07 02:28:46.043615 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-07 02:28:46.043626 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-07 02:28:46.043677 | orchestrator |
2026-04-07 02:28:46.043700 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-07 02:28:46.043711 | orchestrator | Tuesday 07 April 2026 02:28:42 +0000 (0:00:01.579) 0:00:05.722 *********
2026-04-07 02:28:46.043722 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-07 02:28:46.043734 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:28:46.043746 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-07 02:28:46.043757 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:28:46.043768 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-07 02:28:46.043780 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:28:46.043791 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-07 02:28:46.043802 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:28:46.043814 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-07 02:28:46.043825 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:28:46.043836 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-07 02:28:46.043848 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:28:46.043860 | orchestrator |
2026-04-07 02:28:46.043873 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-07 02:28:46.043884 | orchestrator | Tuesday 07 April 2026 02:28:43 +0000 (0:00:00.818) 0:00:07.814 *********
2026-04-07 02:28:46.043894 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:28:46.043903 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:28:46.043913 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:28:46.043923 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:28:46.043932 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:28:46.043942 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:28:46.043951 | orchestrator | 2026-04-07 02:28:46.043961 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-07 02:28:46.043980 | orchestrator | Tuesday 07 April 2026 02:28:44 +0000 (0:00:00.818) 0:00:07.814 ********* 2026-04-07 02:28:46.044011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:46.044027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:46.044038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:46.044125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:46.044148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:46.044168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445335 | orchestrator | 2026-04-07 02:28:48.445358 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-07 02:28:48.445382 | orchestrator | Tuesday 07 April 2026 02:28:46 +0000 (0:00:01.496) 0:00:09.310 ********* 2026-04-07 02:28:48.445402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:48.445502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404241 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404382 | orchestrator | 2026-04-07 02:28:51.404389 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-07 02:28:51.404397 | orchestrator | Tuesday 07 April 2026 02:28:48 +0000 (0:00:02.405) 0:00:11.716 ********* 2026-04-07 02:28:51.404404 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:28:51.404412 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:28:51.404418 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:28:51.404424 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:28:51.404430 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:28:51.404436 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:28:51.404443 | orchestrator | 2026-04-07 02:28:51.404449 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-07 02:28:51.404456 | orchestrator | Tuesday 07 April 2026 02:28:49 +0000 (0:00:01.160) 0:00:12.876 ********* 2026-04-07 02:28:51.404462 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:28:51.404508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:29:17.016934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 
02:29:17.017137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 02:29:17.017206 | orchestrator | 2026-04-07 02:29:17.017220 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 02:29:17.017234 | orchestrator | Tuesday 07 April 2026 02:28:51 +0000 (0:00:01.828) 0:00:14.704 ********* 2026-04-07 02:29:17.017247 | orchestrator | 2026-04-07 02:29:17.017260 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 02:29:17.017384 | orchestrator | Tuesday 07 April 2026 02:28:51 +0000 (0:00:00.329) 0:00:15.034 ********* 2026-04-07 02:29:17.017411 | orchestrator | 2026-04-07 02:29:17.017423 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 02:29:17.017435 | orchestrator | Tuesday 07 April 2026 02:28:51 +0000 (0:00:00.135) 0:00:15.169 ********* 2026-04-07 02:29:17.017447 | orchestrator | 2026-04-07 02:29:17.017459 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-04-07 02:29:17.017471 | orchestrator | Tuesday 07 April 2026 02:28:52 +0000 (0:00:00.126) 0:00:15.296 ********* 2026-04-07 02:29:17.017483 | orchestrator | 2026-04-07 02:29:17.017495 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 02:29:17.017508 | orchestrator | Tuesday 07 April 2026 02:28:52 +0000 (0:00:00.147) 0:00:15.444 ********* 2026-04-07 02:29:17.017522 | orchestrator | 2026-04-07 02:29:17.017536 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 02:29:17.017550 | orchestrator | Tuesday 07 April 2026 02:28:52 +0000 (0:00:00.137) 0:00:15.581 ********* 2026-04-07 02:29:17.017565 | orchestrator | 2026-04-07 02:29:17.017579 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-07 02:29:17.017592 | orchestrator | Tuesday 07 April 2026 02:28:52 +0000 (0:00:00.130) 0:00:15.712 ********* 2026-04-07 02:29:17.017604 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:29:17.017619 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:29:17.017632 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:29:17.017644 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:29:17.017656 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:29:17.017668 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:29:17.017681 | orchestrator | 2026-04-07 02:29:17.017694 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-07 02:29:17.017707 | orchestrator | Tuesday 07 April 2026 02:29:01 +0000 (0:00:08.543) 0:00:24.256 ********* 2026-04-07 02:29:17.017729 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:29:17.017744 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:29:17.017756 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:29:17.017769 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:29:17.017783 | orchestrator | ok: 
[testbed-node-4] 2026-04-07 02:29:17.017796 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:29:17.017808 | orchestrator | 2026-04-07 02:29:17.017820 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-07 02:29:17.017832 | orchestrator | Tuesday 07 April 2026 02:29:02 +0000 (0:00:01.111) 0:00:25.367 ********* 2026-04-07 02:29:17.017845 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:29:17.017857 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:29:17.017869 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:29:17.017882 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:29:17.017896 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:29:17.017908 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:29:17.017921 | orchestrator | 2026-04-07 02:29:17.017934 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-07 02:29:17.017946 | orchestrator | Tuesday 07 April 2026 02:29:10 +0000 (0:00:08.184) 0:00:33.551 ********* 2026-04-07 02:29:17.017958 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-07 02:29:17.017970 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-07 02:29:17.017982 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-07 02:29:17.017995 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-07 02:29:17.018008 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-07 02:29:17.018081 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-07 
02:29:17.018096 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-07 02:29:17.018134 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-07 02:29:30.195926 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-07 02:29:30.196016 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-07 02:29:30.196025 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-07 02:29:30.196032 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-07 02:29:30.196038 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196044 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196050 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196056 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196062 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196068 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 02:29:30.196074 | orchestrator | 2026-04-07 02:29:30.196081 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-04-07 02:29:30.196088 | orchestrator | Tuesday 07 April 2026 02:29:17 +0000 (0:00:06.645) 0:00:40.197 ********* 2026-04-07 02:29:30.196095 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-07 02:29:30.196102 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:29:30.196109 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-07 02:29:30.196115 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:29:30.196121 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-07 02:29:30.196127 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:29:30.196133 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-07 02:29:30.196139 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-07 02:29:30.196145 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-07 02:29:30.196150 | orchestrator | 2026-04-07 02:29:30.196156 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-07 02:29:30.196162 | orchestrator | Tuesday 07 April 2026 02:29:19 +0000 (0:00:02.450) 0:00:42.648 ********* 2026-04-07 02:29:30.196168 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-07 02:29:30.196174 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:29:30.196180 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-07 02:29:30.196186 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:29:30.196192 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-07 02:29:30.196198 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:29:30.196204 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-07 02:29:30.196210 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-07 02:29:30.196228 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-07 02:29:30.196234 | orchestrator 
| 2026-04-07 02:29:30.196240 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-07 02:29:30.196246 | orchestrator | Tuesday 07 April 2026 02:29:22 +0000 (0:00:03.089) 0:00:45.737 ********* 2026-04-07 02:29:30.196252 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:29:30.196258 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:29:30.196307 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:29:30.196317 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:29:30.196323 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:29:30.196329 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:29:30.196335 | orchestrator | 2026-04-07 02:29:30.196341 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:29:30.196348 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 02:29:30.196355 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 02:29:30.196361 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 02:29:30.196367 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:29:30.196373 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:29:30.196379 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:29:30.196384 | orchestrator | 2026-04-07 02:29:30.196390 | orchestrator | 2026-04-07 02:29:30.196396 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:29:30.196402 | orchestrator | Tuesday 07 April 2026 02:29:29 +0000 (0:00:07.176) 0:00:52.913 ********* 2026-04-07 02:29:30.196420 | 
orchestrator | =============================================================================== 2026-04-07 02:29:30.196426 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.36s 2026-04-07 02:29:30.196432 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.54s 2026-04-07 02:29:30.196438 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.65s 2026-04-07 02:29:30.196466 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.09s 2026-04-07 02:29:30.196472 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.45s 2026-04-07 02:29:30.196478 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.41s 2026-04-07 02:29:30.196484 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.83s 2026-04-07 02:29:30.196491 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.58s 2026-04-07 02:29:30.196497 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.50s 2026-04-07 02:29:30.196504 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.27s 2026-04-07 02:29:30.196510 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s 2026-04-07 02:29:30.196517 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.21s 2026-04-07 02:29:30.196524 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.16s 2026-04-07 02:29:30.196531 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.11s 2026-04-07 02:29:30.196538 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.01s 2026-04-07 02:29:30.196544 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.82s 2026-04-07 02:29:30.196551 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-04-07 02:29:30.196557 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2026-04-07 02:29:32.754682 | orchestrator | 2026-04-07 02:29:32 | INFO  | Task 6e76e96c-876b-4f0e-b31a-c1b34a758119 (ovn) was prepared for execution. 2026-04-07 02:29:32.754784 | orchestrator | 2026-04-07 02:29:32 | INFO  | It takes a moment until task 6e76e96c-876b-4f0e-b31a-c1b34a758119 (ovn) has been started and output is visible here. 2026-04-07 02:29:43.863765 | orchestrator | 2026-04-07 02:29:43.863908 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:29:43.863929 | orchestrator | 2026-04-07 02:29:43.863954 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:29:43.863971 | orchestrator | Tuesday 07 April 2026 02:29:37 +0000 (0:00:00.172) 0:00:00.172 ********* 2026-04-07 02:29:43.863987 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:29:43.864002 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:29:43.864017 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:29:43.864031 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:29:43.864045 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:29:43.864060 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:29:43.864074 | orchestrator | 2026-04-07 02:29:43.864089 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:29:43.864103 | orchestrator | Tuesday 07 April 2026 02:29:37 +0000 (0:00:00.733) 0:00:00.905 ********* 2026-04-07 02:29:43.864135 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-07 02:29:43.864151 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-07 
02:29:43.864165 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-07 02:29:43.864180 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-07 02:29:43.864194 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-07 02:29:43.864208 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-07 02:29:43.864223 | orchestrator |
2026-04-07 02:29:43.864238 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-07 02:29:43.864252 | orchestrator |
2026-04-07 02:29:43.864267 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-07 02:29:43.864357 | orchestrator | Tuesday 07 April 2026 02:29:38 +0000 (0:00:00.864) 0:00:01.770 *********
2026-04-07 02:29:43.864373 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:29:43.864390 | orchestrator |
2026-04-07 02:29:43.864405 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-07 02:29:43.864419 | orchestrator | Tuesday 07 April 2026 02:29:39 +0000 (0:00:01.164) 0:00:02.934 *********
2026-04-07 02:29:43.864438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864583 | orchestrator |
2026-04-07 02:29:43.864599 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-07 02:29:43.864614 | orchestrator | Tuesday 07 April 2026 02:29:41 +0000 (0:00:01.226) 0:00:04.160 *********
2026-04-07 02:29:43.864636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864733 | orchestrator |
2026-04-07 02:29:43.864747 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-07 02:29:43.864762 | orchestrator | Tuesday 07 April 2026 02:29:42 +0000 (0:00:01.510) 0:00:05.671 *********
2026-04-07 02:29:43.864776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:29:43.864814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661561 | orchestrator |
2026-04-07 02:30:08.661583 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-07 02:30:08.661603 | orchestrator | Tuesday 07 April 2026 02:29:43 +0000 (0:00:01.158) 0:00:06.830 *********
2026-04-07 02:30:08.661621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661732 | orchestrator |
2026-04-07 02:30:08.661743 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-04-07 02:30:08.661754 | orchestrator | Tuesday 07 April 2026 02:29:45 +0000 (0:00:01.505) 0:00:08.335 *********
2026-04-07 02:30:08.661773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 02:30:08.661849 | orchestrator |
2026-04-07 02:30:08.661860 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-07 02:30:08.661871 | orchestrator | Tuesday 07 April 2026 02:29:46 +0000 (0:00:01.370) 0:00:09.705 *********
2026-04-07 02:30:08.661885 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:30:08.661899 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:30:08.661912 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:30:08.661926 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:30:08.661939 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:30:08.661952 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:30:08.661965 | orchestrator |
2026-04-07 02:30:08.661979 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-07 02:30:08.661992 | orchestrator | Tuesday 07 April 2026 02:29:49 +0000 (0:00:02.470) 0:00:12.176 *********
2026-04-07 02:30:08.662005 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
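For orientation while reading the task output that follows: the "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks amount to creating the integration bridge and writing `external_ids` keys into the local Open vSwitch database on each node. A rough manual equivalent, sketched with `ovs-vsctl` (the role itself drives this through Ansible modules inside the openvswitch container; the IP values here are taken from the testbed-node-4 entries in this log):

```shell
# Sketch only - not the literal commands kolla-ansible runs.
ovs-vsctl --may-exist add-br br-int          # integration bridge used by ovn-controller
ovs-vsctl set open_vswitch . \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=192.168.16.14 \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    external_ids:ovn-remote-probe-interval=60000 \
    external_ids:ovn-openflow-probe-interval=60 \
    external_ids:ovn-monitor-all=false
# Gateway-capable nodes additionally get bridge mappings and CMS options:
ovs-vsctl set open_vswitch . \
    external_ids:ovn-bridge-mappings=physnet1:br-ex \
    external_ids:"ovn-cms-options=enable-chassis-as-gw,availability-zones=nova"
```

In the log below, the `state: absent`/`state: present` items reflect exactly this split: the compute nodes get `ovn-chassis-mac-mappings`, while the control/network nodes get `ovn-bridge-mappings` and `ovn-cms-options`.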
2026-04-07 02:30:08.662114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-07 02:30:08.662128 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-07 02:30:08.662141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-07 02:30:08.662154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-07 02:30:08.662167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-07 02:30:08.662188 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601210 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601346 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601372 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601380 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 02:30:48.601393 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601401 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601426 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601445 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-07 02:30:48.601452 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601466 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601472 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601479 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601485 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 02:30:48.601491 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601498 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601504 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 02:30:48.601528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601541 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601553 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 02:30:48.601565 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 02:30:48.601572 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 02:30:48.601578 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 02:30:48.601584 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 02:30:48.601591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 02:30:48.601597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-07 02:30:48.601604 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 02:30:48.601629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-07 02:30:48.601636 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-07 02:30:48.601646 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-07 02:30:48.601652 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-07 02:30:48.601658 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 02:30:48.601665 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 02:30:48.601671 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-07 02:30:48.601677 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 02:30:48.601683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-07 02:30:48.601689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-07 02:30:48.601695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-07 02:30:48.601702 | orchestrator |
2026-04-07 02:30:48.601708 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601715 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:18.859) 0:00:31.035 *********
2026-04-07 02:30:48.601721 | orchestrator |
2026-04-07 02:30:48.601727 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601733 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.248) 0:00:31.283 *********
2026-04-07 02:30:48.601739 | orchestrator |
2026-04-07 02:30:48.601745 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601751 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.066) 0:00:31.350 *********
2026-04-07 02:30:48.601758 | orchestrator |
2026-04-07 02:30:48.601766 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601773 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.068) 0:00:31.418 *********
2026-04-07 02:30:48.601780 | orchestrator |
2026-04-07 02:30:48.601787 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601794 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.065) 0:00:31.483 *********
2026-04-07 02:30:48.601801 | orchestrator |
2026-04-07 02:30:48.601808 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-07 02:30:48.601816 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.069) 0:00:31.552 *********
2026-04-07 02:30:48.601823 | orchestrator |
2026-04-07 02:30:48.601830 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-04-07 02:30:48.601837 | orchestrator | Tuesday 07 April 2026 02:30:08 +0000 (0:00:00.068) 0:00:31.621 *********
2026-04-07 02:30:48.601845 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:30:48.601852 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:30:48.601860 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:30:48.601867 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:30:48.601874 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:30:48.601881 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:30:48.601888 | orchestrator |
2026-04-07 02:30:48.601896 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-07 02:30:48.601903 | orchestrator | Tuesday 07 April 2026 02:30:10 +0000 (0:00:01.654) 0:00:33.276 *********
2026-04-07 02:30:48.601914 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:30:48.601922 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:30:48.601929 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:30:48.601936 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:30:48.601944 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:30:48.601951 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:30:48.601959 | orchestrator |
2026-04-07 02:30:48.601966 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-07 02:30:48.601973 | orchestrator |
2026-04-07 02:30:48.601981 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-07 02:30:48.601988 | orchestrator | Tuesday 07 April 2026 02:30:46 +0000 (0:00:36.012) 0:01:09.288 *********
2026-04-07 02:30:48.601995 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:30:48.602002 | orchestrator |
2026-04-07 02:30:48.602012 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-07 02:30:48.602096 | orchestrator | Tuesday 07 April 2026 02:30:47 +0000 (0:00:00.721) 0:01:10.010 *********
2026-04-07 02:30:48.602103 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:30:48.602109 | orchestrator |
2026-04-07 02:30:48.602115 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-07 02:30:48.602121 | orchestrator | Tuesday 07 April 2026 02:30:47 +0000 (0:00:00.574) 0:01:10.584 *********
2026-04-07 02:30:48.602128 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:30:48.602134 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:30:48.602140 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:30:48.602146 | orchestrator |
2026-04-07 02:30:48.602152 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-07 02:30:48.602164 | orchestrator | Tuesday 07 April 2026 02:30:48 +0000 (0:00:00.978) 0:01:11.563 *********
2026-04-07 02:31:00.272358 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.272440 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.272449 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.272456 | orchestrator |
2026-04-07 02:31:00.272463 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-07 02:31:00.272483 | orchestrator | Tuesday 07 April 2026 02:30:48 +0000 (0:00:00.341) 0:01:11.905 *********
2026-04-07 02:31:00.272489 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.272495 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.272501 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.272507 | orchestrator |
2026-04-07 02:31:00.272513 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-07 02:31:00.272519 | orchestrator | Tuesday 07 April 2026 02:30:49 +0000 (0:00:00.345) 0:01:12.250 *********
2026-04-07 02:31:00.272526 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.272532 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.272538 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.272543 | orchestrator |
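The lookup_cluster.yml checks running here probe for pre-existing OVN NB/SB database volumes and Raft cluster state before bootstrapping. The same state can be inspected by hand on a control node; a hedged sketch, assuming the kolla default container names `ovn_nb_db`/`ovn_sb_db` and control-socket paths (both may differ per release):

```shell
# Raft cluster state of the OVN databases (sketch; paths/names are assumptions).
docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
# The DB listeners: 6641 = NB, 6642 = SB (the ovn-remote value above targets 6642).
ss -tlnp | grep -E ':664[12]'
```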
2026-04-07 02:31:00.272549 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-07 02:31:00.272555 | orchestrator | Tuesday 07 April 2026 02:30:49 +0000 (0:00:00.329) 0:01:12.580 *********
2026-04-07 02:31:00.272561 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.272567 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.272572 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.272578 | orchestrator |
2026-04-07 02:31:00.272584 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-07 02:31:00.272590 | orchestrator | Tuesday 07 April 2026 02:30:50 +0000 (0:00:00.547) 0:01:13.127 *********
2026-04-07 02:31:00.272596 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272603 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272609 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272615 | orchestrator |
2026-04-07 02:31:00.272620 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-07 02:31:00.272643 | orchestrator | Tuesday 07 April 2026 02:30:50 +0000 (0:00:00.337) 0:01:13.465 *********
2026-04-07 02:31:00.272650 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272657 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272666 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272676 | orchestrator |
2026-04-07 02:31:00.272686 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-07 02:31:00.272692 | orchestrator | Tuesday 07 April 2026 02:30:50 +0000 (0:00:00.313) 0:01:13.778 *********
2026-04-07 02:31:00.272698 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272704 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272710 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272716 | orchestrator |
2026-04-07 02:31:00.272722 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-07 02:31:00.272727 | orchestrator | Tuesday 07 April 2026 02:30:51 +0000 (0:00:00.349) 0:01:14.128 *********
2026-04-07 02:31:00.272733 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272739 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272745 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272751 | orchestrator |
2026-04-07 02:31:00.272757 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-07 02:31:00.272762 | orchestrator | Tuesday 07 April 2026 02:30:51 +0000 (0:00:00.321) 0:01:14.450 *********
2026-04-07 02:31:00.272768 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272775 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272781 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272786 | orchestrator |
2026-04-07 02:31:00.272792 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-07 02:31:00.272798 | orchestrator | Tuesday 07 April 2026 02:30:52 +0000 (0:00:00.553) 0:01:15.004 *********
2026-04-07 02:31:00.272804 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272810 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272816 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272822 | orchestrator |
2026-04-07 02:31:00.272827 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-07 02:31:00.272833 | orchestrator | Tuesday 07 April 2026 02:30:52 +0000 (0:00:00.293) 0:01:15.297 *********
2026-04-07 02:31:00.272839 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272845 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272851 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272857 | orchestrator |
2026-04-07 02:31:00.272863 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-07 02:31:00.272868 | orchestrator | Tuesday 07 April 2026 02:30:52 +0000 (0:00:00.326) 0:01:15.624 *********
2026-04-07 02:31:00.272874 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272880 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272886 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272893 | orchestrator |
2026-04-07 02:31:00.272900 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-07 02:31:00.272908 | orchestrator | Tuesday 07 April 2026 02:30:52 +0000 (0:00:00.317) 0:01:15.942 *********
2026-04-07 02:31:00.272915 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272922 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272928 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272936 | orchestrator |
2026-04-07 02:31:00.272943 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-07 02:31:00.272950 | orchestrator | Tuesday 07 April 2026 02:30:53 +0000 (0:00:00.557) 0:01:16.500 *********
2026-04-07 02:31:00.272956 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.272963 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.272970 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.272977 | orchestrator |
2026-04-07 02:31:00.272984 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-07 02:31:00.272996 | orchestrator | Tuesday 07 April 2026 02:30:53 +0000 (0:00:00.328) 0:01:16.828 *********
2026-04-07 02:31:00.273003 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273010 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273017 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273024 | orchestrator |
2026-04-07 02:31:00.273031 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-07 02:31:00.273038 | orchestrator | Tuesday 07 April 2026 02:30:54 +0000 (0:00:00.337) 0:01:17.166 *********
2026-04-07 02:31:00.273058 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273068 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273075 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273082 | orchestrator |
2026-04-07 02:31:00.273089 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-07 02:31:00.273101 | orchestrator | Tuesday 07 April 2026 02:30:54 +0000 (0:00:00.326) 0:01:17.492 *********
2026-04-07 02:31:00.273109 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:31:00.273116 | orchestrator |
2026-04-07 02:31:00.273123 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-07 02:31:00.273129 | orchestrator | Tuesday 07 April 2026 02:30:55 +0000 (0:00:00.800) 0:01:18.293 *********
2026-04-07 02:31:00.273136 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.273143 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.273151 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.273157 | orchestrator |
2026-04-07 02:31:00.273164 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-07 02:31:00.273171 | orchestrator | Tuesday 07 April 2026 02:30:55 +0000 (0:00:00.470) 0:01:18.763 *********
2026-04-07 02:31:00.273178 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:31:00.273185 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:31:00.273192 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:31:00.273199 | orchestrator |
2026-04-07 02:31:00.273205 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-07 02:31:00.273212 | orchestrator | Tuesday 07 April 2026 02:30:56 +0000 (0:00:00.448) 0:01:19.212 *********
2026-04-07 02:31:00.273219 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273226 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273233 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273240 | orchestrator |
2026-04-07 02:31:00.273247 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-07 02:31:00.273255 | orchestrator | Tuesday 07 April 2026 02:30:56 +0000 (0:00:00.369) 0:01:19.582 *********
2026-04-07 02:31:00.273261 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273269 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273277 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273286 | orchestrator |
2026-04-07 02:31:00.273362 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-07 02:31:00.273375 | orchestrator | Tuesday 07 April 2026 02:30:57 +0000 (0:00:00.571) 0:01:20.153 *********
2026-04-07 02:31:00.273383 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273392 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273401 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273410 | orchestrator |
2026-04-07 02:31:00.273419 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-07 02:31:00.273428 | orchestrator | Tuesday 07 April 2026 02:30:57 +0000 (0:00:00.411) 0:01:20.565 *********
2026-04-07 02:31:00.273437 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:31:00.273447 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:31:00.273455 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:31:00.273464 | orchestrator |
2026-04-07 02:31:00.273473 | orchestrator | TASK [ovn-db : Set
bootstrap args fact for NB (new member)] ******************** 2026-04-07 02:31:00.273483 | orchestrator | Tuesday 07 April 2026 02:30:57 +0000 (0:00:00.344) 0:01:20.909 ********* 2026-04-07 02:31:00.273507 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:31:00.273517 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:31:00.273526 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:31:00.273536 | orchestrator | 2026-04-07 02:31:00.273545 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-07 02:31:00.273554 | orchestrator | Tuesday 07 April 2026 02:30:58 +0000 (0:00:00.331) 0:01:21.241 ********* 2026-04-07 02:31:00.273564 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:31:00.273574 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:31:00.273583 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:31:00.273593 | orchestrator | 2026-04-07 02:31:00.273602 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-07 02:31:00.273612 | orchestrator | Tuesday 07 April 2026 02:30:58 +0000 (0:00:00.575) 0:01:21.817 ********* 2026-04-07 02:31:00.273624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:00.273636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-07 02:31:00.273646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:00.273673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622545 | orchestrator | 2026-04-07 02:31:06.622556 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-07 02:31:06.622566 | orchestrator | Tuesday 07 April 2026 02:31:00 +0000 (0:00:01.416) 0:01:23.233 ********* 2026-04-07 02:31:06.622577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622699 | orchestrator | 2026-04-07 02:31:06.622708 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-07 02:31:06.622717 | orchestrator | Tuesday 07 April 2026 02:31:04 +0000 (0:00:03.837) 0:01:27.070 ********* 2026-04-07 02:31:06.622726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:06.622782 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.280049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.280155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.280167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.280175 | orchestrator | 2026-04-07 02:31:30.280184 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:30.280193 | 
orchestrator | Tuesday 07 April 2026 02:31:06 +0000 (0:00:02.020) 0:01:29.091 ********* 2026-04-07 02:31:30.280200 | orchestrator | 2026-04-07 02:31:30.280208 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:30.280215 | orchestrator | Tuesday 07 April 2026 02:31:06 +0000 (0:00:00.109) 0:01:29.200 ********* 2026-04-07 02:31:30.280222 | orchestrator | 2026-04-07 02:31:30.280229 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:30.280236 | orchestrator | Tuesday 07 April 2026 02:31:06 +0000 (0:00:00.309) 0:01:29.509 ********* 2026-04-07 02:31:30.280243 | orchestrator | 2026-04-07 02:31:30.280250 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-07 02:31:30.280257 | orchestrator | Tuesday 07 April 2026 02:31:06 +0000 (0:00:00.070) 0:01:29.579 ********* 2026-04-07 02:31:30.280265 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:31:30.280273 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:31:30.280280 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:31:30.280287 | orchestrator | 2026-04-07 02:31:30.280295 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-07 02:31:30.280302 | orchestrator | Tuesday 07 April 2026 02:31:09 +0000 (0:00:02.575) 0:01:32.155 ********* 2026-04-07 02:31:30.280309 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:31:30.280340 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:31:30.280349 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:31:30.280357 | orchestrator | 2026-04-07 02:31:30.280364 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-07 02:31:30.280371 | orchestrator | Tuesday 07 April 2026 02:31:16 +0000 (0:00:07.433) 0:01:39.589 ********* 2026-04-07 02:31:30.280378 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 02:31:30.280386 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:31:30.280393 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:31:30.280402 | orchestrator | 2026-04-07 02:31:30.280414 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-07 02:31:30.280425 | orchestrator | Tuesday 07 April 2026 02:31:23 +0000 (0:00:06.661) 0:01:46.250 ********* 2026-04-07 02:31:30.280436 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:31:30.280447 | orchestrator | 2026-04-07 02:31:30.280458 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-07 02:31:30.280470 | orchestrator | Tuesday 07 April 2026 02:31:23 +0000 (0:00:00.135) 0:01:46.385 ********* 2026-04-07 02:31:30.280481 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:31:30.280494 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:31:30.280506 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:31:30.280518 | orchestrator | 2026-04-07 02:31:30.280531 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-07 02:31:30.280543 | orchestrator | Tuesday 07 April 2026 02:31:24 +0000 (0:00:01.050) 0:01:47.436 ********* 2026-04-07 02:31:30.280554 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:31:30.280578 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:31:30.280590 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:31:30.280602 | orchestrator | 2026-04-07 02:31:30.280614 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-07 02:31:30.280625 | orchestrator | Tuesday 07 April 2026 02:31:25 +0000 (0:00:00.657) 0:01:48.094 ********* 2026-04-07 02:31:30.280637 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:31:30.280650 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:31:30.280663 | orchestrator | ok: [testbed-node-2] 2026-04-07 
02:31:30.280676 | orchestrator | 2026-04-07 02:31:30.280688 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-07 02:31:30.280716 | orchestrator | Tuesday 07 April 2026 02:31:25 +0000 (0:00:00.794) 0:01:48.888 ********* 2026-04-07 02:31:30.280729 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:31:30.280741 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:31:30.280754 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:31:30.280765 | orchestrator | 2026-04-07 02:31:30.280777 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-07 02:31:30.280790 | orchestrator | Tuesday 07 April 2026 02:31:26 +0000 (0:00:00.667) 0:01:49.556 ********* 2026-04-07 02:31:30.280803 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:31:30.280815 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:31:30.280849 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:31:30.280863 | orchestrator | 2026-04-07 02:31:30.280875 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-07 02:31:30.280888 | orchestrator | Tuesday 07 April 2026 02:31:27 +0000 (0:00:01.205) 0:01:50.762 ********* 2026-04-07 02:31:30.280900 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:31:30.280912 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:31:30.280924 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:31:30.280935 | orchestrator | 2026-04-07 02:31:30.280948 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-07 02:31:30.280960 | orchestrator | Tuesday 07 April 2026 02:31:28 +0000 (0:00:00.742) 0:01:51.505 ********* 2026-04-07 02:31:30.280973 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:31:30.280986 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:31:30.280997 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:31:30.281009 | orchestrator | 2026-04-07 
02:31:30.281021 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-07 02:31:30.281033 | orchestrator | Tuesday 07 April 2026 02:31:28 +0000 (0:00:00.305) 0:01:51.810 ********* 2026-04-07 02:31:30.281048 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281062 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281112 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281138 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:30.281182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668609 | orchestrator | 2026-04-07 02:31:37.668737 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-07 02:31:37.668757 | orchestrator | Tuesday 07 April 2026 02:31:30 +0000 (0:00:01.425) 0:01:53.235 ********* 2026-04-07 02:31:37.668772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668787 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668876 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-07 02:31:37.668913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668925 | orchestrator | 2026-04-07 02:31:37.668937 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-07 02:31:37.668948 | orchestrator | Tuesday 07 April 2026 02:31:34 +0000 (0:00:03.963) 0:01:57.199 ********* 2026-04-07 02:31:37.668977 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.668990 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669002 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 
02:31:37.669013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669046 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669085 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 02:31:37.669097 | orchestrator | 2026-04-07 02:31:37.669108 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:37.669119 | orchestrator | Tuesday 07 April 2026 02:31:37 +0000 (0:00:03.216) 0:02:00.416 ********* 2026-04-07 02:31:37.669131 | orchestrator | 2026-04-07 02:31:37.669143 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:37.669156 | orchestrator | Tuesday 07 April 2026 02:31:37 +0000 (0:00:00.070) 0:02:00.486 ********* 2026-04-07 02:31:37.669171 | orchestrator | 2026-04-07 02:31:37.669191 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 02:31:37.669219 | orchestrator | Tuesday 07 April 2026 02:31:37 +0000 (0:00:00.066) 0:02:00.553 ********* 2026-04-07 02:31:37.669240 | orchestrator | 2026-04-07 02:31:37.669269 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-07 02:32:02.241725 | orchestrator | Tuesday 07 April 2026 02:31:37 +0000 (0:00:00.067) 0:02:00.620 ********* 2026-04-07 02:32:02.241808 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:32:02.241818 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 02:32:02.241824 | orchestrator | 2026-04-07 02:32:02.241830 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-07 02:32:02.241835 | orchestrator | Tuesday 07 April 2026 02:31:43 +0000 (0:00:06.196) 0:02:06.817 ********* 2026-04-07 02:32:02.241840 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:32:02.241845 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:32:02.241850 | orchestrator | 2026-04-07 02:32:02.241855 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-07 02:32:02.241886 | orchestrator | Tuesday 07 April 2026 02:31:50 +0000 (0:00:06.255) 0:02:13.073 ********* 2026-04-07 02:32:02.241891 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:32:02.241896 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:32:02.241901 | orchestrator | 2026-04-07 02:32:02.241905 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-07 02:32:02.241910 | orchestrator | Tuesday 07 April 2026 02:31:56 +0000 (0:00:06.300) 0:02:19.374 ********* 2026-04-07 02:32:02.241915 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:32:02.241919 | orchestrator | 2026-04-07 02:32:02.241924 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-07 02:32:02.241929 | orchestrator | Tuesday 07 April 2026 02:31:56 +0000 (0:00:00.149) 0:02:19.523 ********* 2026-04-07 02:32:02.241933 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:32:02.241939 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:02.241943 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:02.241948 | orchestrator | 2026-04-07 02:32:02.241953 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-07 02:32:02.241957 | orchestrator | Tuesday 07 April 2026 02:31:57 +0000 (0:00:01.075) 0:02:20.599 ********* 
2026-04-07 02:32:02.241962 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:32:02.241967 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:32:02.241971 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:32:02.241976 | orchestrator | 2026-04-07 02:32:02.241980 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-07 02:32:02.241985 | orchestrator | Tuesday 07 April 2026 02:31:58 +0000 (0:00:00.767) 0:02:21.366 ********* 2026-04-07 02:32:02.241990 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:32:02.241995 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:02.241999 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:02.242004 | orchestrator | 2026-04-07 02:32:02.242008 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-07 02:32:02.242047 | orchestrator | Tuesday 07 April 2026 02:31:59 +0000 (0:00:00.773) 0:02:22.140 ********* 2026-04-07 02:32:02.242052 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:32:02.242057 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:32:02.242061 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:32:02.242066 | orchestrator | 2026-04-07 02:32:02.242071 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-07 02:32:02.242079 | orchestrator | Tuesday 07 April 2026 02:31:59 +0000 (0:00:00.686) 0:02:22.826 ********* 2026-04-07 02:32:02.242087 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:32:02.242095 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:02.242107 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:02.242115 | orchestrator | 2026-04-07 02:32:02.242123 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-07 02:32:02.242131 | orchestrator | Tuesday 07 April 2026 02:32:00 +0000 (0:00:01.042) 0:02:23.868 ********* 2026-04-07 02:32:02.242139 | orchestrator 
| ok: [testbed-node-0] 2026-04-07 02:32:02.242147 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:02.242154 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:02.242162 | orchestrator | 2026-04-07 02:32:02.242169 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:32:02.242179 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-07 02:32:02.242210 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-07 02:32:02.242218 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-07 02:32:02.242226 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:32:02.242243 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:32:02.242251 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:32:02.242259 | orchestrator | 2026-04-07 02:32:02.242266 | orchestrator | 2026-04-07 02:32:02.242287 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:32:02.242295 | orchestrator | Tuesday 07 April 2026 02:32:01 +0000 (0:00:00.899) 0:02:24.768 ********* 2026-04-07 02:32:02.242302 | orchestrator | =============================================================================== 2026-04-07 02:32:02.242310 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.01s 2026-04-07 02:32:02.242318 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.86s 2026-04-07 02:32:02.242326 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.69s 2026-04-07 02:32:02.242334 | orchestrator | ovn-db 
: Restart ovn-northd container ---------------------------------- 12.96s 2026-04-07 02:32:02.242388 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.77s 2026-04-07 02:32:02.242415 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.96s 2026-04-07 02:32:02.242422 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.84s 2026-04-07 02:32:02.242429 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.22s 2026-04-07 02:32:02.242437 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.47s 2026-04-07 02:32:02.242444 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.02s 2026-04-07 02:32:02.242452 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.65s 2026-04-07 02:32:02.242458 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-04-07 02:32:02.242466 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2026-04-07 02:32:02.242473 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2026-04-07 02:32:02.242481 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-04-07 02:32:02.242488 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.37s 2026-04-07 02:32:02.242495 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.23s 2026-04-07 02:32:02.242502 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.21s 2026-04-07 02:32:02.242510 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.16s 2026-04-07 02:32:02.242516 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.16s 2026-04-07 02:32:02.610309 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-07 02:32:02.610450 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-04-07 02:32:04.891532 | orchestrator | 2026-04-07 02:32:04 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-07 02:32:15.115810 | orchestrator | 2026-04-07 02:32:15 | INFO  | Task 8f350c17-469c-4aad-bb35-95714d151ff8 (wipe-partitions) was prepared for execution. 2026-04-07 02:32:15.115926 | orchestrator | 2026-04-07 02:32:15 | INFO  | It takes a moment until task 8f350c17-469c-4aad-bb35-95714d151ff8 (wipe-partitions) has been started and output is visible here. 2026-04-07 02:32:28.571520 | orchestrator | 2026-04-07 02:32:28.571668 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-07 02:32:28.571699 | orchestrator | 2026-04-07 02:32:28.571719 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-07 02:32:28.571738 | orchestrator | Tuesday 07 April 2026 02:32:19 +0000 (0:00:00.147) 0:00:00.147 ********* 2026-04-07 02:32:28.571792 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:32:28.571814 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:32:28.571826 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:32:28.571837 | orchestrator | 2026-04-07 02:32:28.571849 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-07 02:32:28.571860 | orchestrator | Tuesday 07 April 2026 02:32:20 +0000 (0:00:00.630) 0:00:00.778 ********* 2026-04-07 02:32:28.571871 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:32:28.571882 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:32:28.571892 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:32:28.571903 | orchestrator | 2026-04-07 02:32:28.571914 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-07 02:32:28.571925 | orchestrator | Tuesday 07 April 2026 02:32:20 +0000 (0:00:00.455) 0:00:01.234 ********* 2026-04-07 02:32:28.571936 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:32:28.571948 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:32:28.571958 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:32:28.571969 | orchestrator | 2026-04-07 02:32:28.571980 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-07 02:32:28.571991 | orchestrator | Tuesday 07 April 2026 02:32:21 +0000 (0:00:00.595) 0:00:01.829 ********* 2026-04-07 02:32:28.572002 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:32:28.572016 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:32:28.572029 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:32:28.572041 | orchestrator | 2026-04-07 02:32:28.572054 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-07 02:32:28.572067 | orchestrator | Tuesday 07 April 2026 02:32:21 +0000 (0:00:00.279) 0:00:02.108 ********* 2026-04-07 02:32:28.572079 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 02:32:28.572093 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 02:32:28.572106 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 02:32:28.572119 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 02:32:28.572131 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 02:32:28.572144 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-07 02:32:28.572171 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 02:32:28.572184 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 02:32:28.572196 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-04-07 02:32:28.572208 | orchestrator | 2026-04-07 02:32:28.572221 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-07 02:32:28.572234 | orchestrator | Tuesday 07 April 2026 02:32:22 +0000 (0:00:01.405) 0:00:03.514 ********* 2026-04-07 02:32:28.572247 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 02:32:28.572259 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 02:32:28.572271 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 02:32:28.572284 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 02:32:28.572296 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 02:32:28.572308 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-07 02:32:28.572320 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 02:32:28.572332 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 02:32:28.572344 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-07 02:32:28.572388 | orchestrator | 2026-04-07 02:32:28.572401 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-07 02:32:28.572412 | orchestrator | Tuesday 07 April 2026 02:32:24 +0000 (0:00:01.693) 0:00:05.208 ********* 2026-04-07 02:32:28.572424 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-07 02:32:28.572442 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-07 02:32:28.572457 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-07 02:32:28.572477 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-07 02:32:28.572517 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-07 02:32:28.572534 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-07 02:32:28.572551 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-07 02:32:28.572567 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-07 02:32:28.572583 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-07 02:32:28.572600 | orchestrator | 2026-04-07 02:32:28.572619 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-07 02:32:28.572636 | orchestrator | Tuesday 07 April 2026 02:32:26 +0000 (0:00:02.205) 0:00:07.414 ********* 2026-04-07 02:32:28.572653 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:32:28.572671 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:32:28.572689 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:32:28.572706 | orchestrator | 2026-04-07 02:32:28.572724 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-04-07 02:32:28.572743 | orchestrator | Tuesday 07 April 2026 02:32:27 +0000 (0:00:00.657) 0:00:08.072 ********* 2026-04-07 02:32:28.572762 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:32:28.572780 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:32:28.572799 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:32:28.572827 | orchestrator | 2026-04-07 02:32:28.572846 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:32:28.572865 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:28.572884 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:28.572929 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:28.572949 | orchestrator | 2026-04-07 02:32:28.572967 | orchestrator | 2026-04-07 02:32:28.572984 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:32:28.573003 | orchestrator | Tuesday 07 April 2026 02:32:28 +0000 (0:00:00.674) 
0:00:08.746 ********* 2026-04-07 02:32:28.573021 | orchestrator | =============================================================================== 2026-04-07 02:32:28.573040 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-04-07 02:32:28.573058 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.69s 2026-04-07 02:32:28.573076 | orchestrator | Check device availability ----------------------------------------------- 1.41s 2026-04-07 02:32:28.573095 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2026-04-07 02:32:28.573114 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s 2026-04-07 02:32:28.573133 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.63s 2026-04-07 02:32:28.573152 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2026-04-07 02:32:28.573169 | orchestrator | Remove all rook related logical devices --------------------------------- 0.46s 2026-04-07 02:32:28.573187 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2026-04-07 02:32:41.328193 | orchestrator | 2026-04-07 02:32:41 | INFO  | Task f9042158-8aaf-41c6-b65c-41c4ef5ee4ff (facts) was prepared for execution. 2026-04-07 02:32:41.394222 | orchestrator | 2026-04-07 02:32:41 | INFO  | It takes a moment until task f9042158-8aaf-41c6-b65c-41c4ef5ee4ff (facts) has been started and output is visible here. 
2026-04-07 02:32:54.930942 | orchestrator | 2026-04-07 02:32:54.931055 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-07 02:32:54.931072 | orchestrator | 2026-04-07 02:32:54.931085 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 02:32:54.931124 | orchestrator | Tuesday 07 April 2026 02:32:45 +0000 (0:00:00.284) 0:00:00.284 ********* 2026-04-07 02:32:54.931136 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:32:54.931149 | orchestrator | ok: [testbed-manager] 2026-04-07 02:32:54.931160 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:54.931171 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:54.931181 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:32:54.931192 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:32:54.931203 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:32:54.931214 | orchestrator | 2026-04-07 02:32:54.931225 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-07 02:32:54.931237 | orchestrator | Tuesday 07 April 2026 02:32:47 +0000 (0:00:01.235) 0:00:01.520 ********* 2026-04-07 02:32:54.931248 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:32:54.931260 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:32:54.931271 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:32:54.931282 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:32:54.931293 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:32:54.931303 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:32:54.931314 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:32:54.931325 | orchestrator | 2026-04-07 02:32:54.931336 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 02:32:54.931347 | orchestrator | 2026-04-07 02:32:54.931398 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-07 02:32:54.931411 | orchestrator | Tuesday 07 April 2026 02:32:48 +0000 (0:00:01.365) 0:00:02.885 ********* 2026-04-07 02:32:54.931422 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:32:54.931433 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:32:54.931445 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:32:54.931459 | orchestrator | ok: [testbed-manager] 2026-04-07 02:32:54.931471 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:32:54.931483 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:32:54.931496 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:32:54.931508 | orchestrator | 2026-04-07 02:32:54.931522 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 02:32:54.931534 | orchestrator | 2026-04-07 02:32:54.931546 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 02:32:54.931559 | orchestrator | Tuesday 07 April 2026 02:32:53 +0000 (0:00:05.252) 0:00:08.137 ********* 2026-04-07 02:32:54.931571 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:32:54.931584 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:32:54.931597 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:32:54.931610 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:32:54.931622 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:32:54.931635 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:32:54.931647 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:32:54.931661 | orchestrator | 2026-04-07 02:32:54.931672 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:32:54.931684 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931738 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-07 02:32:54.931751 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931762 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931775 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931794 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931826 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:32:54.931845 | orchestrator | 2026-04-07 02:32:54.931863 | orchestrator | 2026-04-07 02:32:54.931880 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:32:54.931898 | orchestrator | Tuesday 07 April 2026 02:32:54 +0000 (0:00:00.614) 0:00:08.752 ********* 2026-04-07 02:32:54.931916 | orchestrator | =============================================================================== 2026-04-07 02:32:54.931933 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.25s 2026-04-07 02:32:54.931950 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s 2026-04-07 02:32:54.931968 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-04-07 02:32:54.931987 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-04-07 02:32:57.638801 | orchestrator | 2026-04-07 02:32:57 | INFO  | Task 2b2abbd2-06a4-45a4-8574-743133d03741 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-04-07 02:32:57.638911 | orchestrator | 2026-04-07 02:32:57 | INFO  | It takes a moment until task 2b2abbd2-06a4-45a4-8574-743133d03741 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-04-07 02:33:10.898435 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 02:33:10.898542 | orchestrator | 2.16.14 2026-04-07 02:33:10.898557 | orchestrator | 2026-04-07 02:33:10.898568 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-07 02:33:10.898578 | orchestrator | 2026-04-07 02:33:10.898588 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 02:33:10.898595 | orchestrator | Tuesday 07 April 2026 02:33:02 +0000 (0:00:00.371) 0:00:00.371 ********* 2026-04-07 02:33:10.898601 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-07 02:33:10.898607 | orchestrator | 2026-04-07 02:33:10.898625 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 02:33:10.898630 | orchestrator | Tuesday 07 April 2026 02:33:02 +0000 (0:00:00.281) 0:00:00.652 ********* 2026-04-07 02:33:10.898636 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:33:10.898641 | orchestrator | 2026-04-07 02:33:10.898646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898651 | orchestrator | Tuesday 07 April 2026 02:33:03 +0000 (0:00:00.253) 0:00:00.906 ********* 2026-04-07 02:33:10.898656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-07 02:33:10.898662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-07 02:33:10.898667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-07 02:33:10.898672 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-07 02:33:10.898677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-07 02:33:10.898682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-07 02:33:10.898687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-07 02:33:10.898692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-07 02:33:10.898697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-07 02:33:10.898702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-07 02:33:10.898707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-07 02:33:10.898712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-07 02:33:10.898736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-07 02:33:10.898741 | orchestrator | 2026-04-07 02:33:10.898746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898751 | orchestrator | Tuesday 07 April 2026 02:33:03 +0000 (0:00:00.537) 0:00:01.443 ********* 2026-04-07 02:33:10.898756 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898762 | orchestrator | 2026-04-07 02:33:10.898767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898772 | orchestrator | Tuesday 07 April 2026 02:33:04 +0000 (0:00:00.228) 0:00:01.672 ********* 2026-04-07 02:33:10.898777 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898782 | orchestrator | 2026-04-07 02:33:10.898787 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898792 | orchestrator | Tuesday 07 April 2026 02:33:04 +0000 (0:00:00.244) 0:00:01.917 ********* 2026-04-07 02:33:10.898797 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898802 | orchestrator | 2026-04-07 02:33:10.898807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898812 | orchestrator | Tuesday 07 April 2026 02:33:04 +0000 (0:00:00.195) 0:00:02.112 ********* 2026-04-07 02:33:10.898817 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898822 | orchestrator | 2026-04-07 02:33:10.898827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898832 | orchestrator | Tuesday 07 April 2026 02:33:04 +0000 (0:00:00.213) 0:00:02.326 ********* 2026-04-07 02:33:10.898837 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898842 | orchestrator | 2026-04-07 02:33:10.898847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898852 | orchestrator | Tuesday 07 April 2026 02:33:04 +0000 (0:00:00.272) 0:00:02.599 ********* 2026-04-07 02:33:10.898857 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898862 | orchestrator | 2026-04-07 02:33:10.898867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898872 | orchestrator | Tuesday 07 April 2026 02:33:05 +0000 (0:00:00.234) 0:00:02.833 ********* 2026-04-07 02:33:10.898877 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:33:10.898882 | orchestrator | 2026-04-07 02:33:10.898887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:10.898891 | orchestrator | Tuesday 07 April 2026 02:33:05 +0000 (0:00:00.225) 0:00:03.058 ********* 
2026-04-07 02:33:10.898896 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.898901 | orchestrator |
2026-04-07 02:33:10.898906 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:10.898911 | orchestrator | Tuesday 07 April 2026  02:33:05 +0000 (0:00:00.228)       0:00:03.287 *********
2026-04-07 02:33:10.898916 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc)
2026-04-07 02:33:10.898923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc)
2026-04-07 02:33:10.898928 | orchestrator |
2026-04-07 02:33:10.898933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:10.898951 | orchestrator | Tuesday 07 April 2026  02:33:06 +0000 (0:00:00.446)       0:00:03.734 *********
2026-04-07 02:33:10.898957 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc)
2026-04-07 02:33:10.898962 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc)
2026-04-07 02:33:10.898967 | orchestrator |
2026-04-07 02:33:10.898972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:10.898977 | orchestrator | Tuesday 07 April 2026  02:33:06 +0000 (0:00:00.753)       0:00:04.487 *********
2026-04-07 02:33:10.898985 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539)
2026-04-07 02:33:10.898995 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539)
2026-04-07 02:33:10.899000 | orchestrator |
2026-04-07 02:33:10.899005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:10.899010 | orchestrator | Tuesday 07 April 2026  02:33:07 +0000 (0:00:00.696)       0:00:05.184 *********
2026-04-07 02:33:10.899015 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc)
2026-04-07 02:33:10.899020 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc)
2026-04-07 02:33:10.899025 | orchestrator |
2026-04-07 02:33:10.899030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:10.899035 | orchestrator | Tuesday 07 April 2026  02:33:08 +0000 (0:00:00.999)       0:00:06.184 *********
2026-04-07 02:33:10.899040 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-07 02:33:10.899045 | orchestrator |
2026-04-07 02:33:10.899050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899055 | orchestrator | Tuesday 07 April 2026  02:33:08 +0000 (0:00:00.372)       0:00:06.556 *********
2026-04-07 02:33:10.899059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-07 02:33:10.899064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-07 02:33:10.899069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-07 02:33:10.899074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-07 02:33:10.899079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-07 02:33:10.899084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-07 02:33:10.899089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-07 02:33:10.899094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-07 02:33:10.899099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-07 02:33:10.899104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-07 02:33:10.899109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-07 02:33:10.899114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-07 02:33:10.899119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-07 02:33:10.899124 | orchestrator |
2026-04-07 02:33:10.899129 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899133 | orchestrator | Tuesday 07 April 2026  02:33:09 +0000 (0:00:00.412)       0:00:06.968 *********
2026-04-07 02:33:10.899138 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899144 | orchestrator |
2026-04-07 02:33:10.899148 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899153 | orchestrator | Tuesday 07 April 2026  02:33:09 +0000 (0:00:00.218)       0:00:07.187 *********
2026-04-07 02:33:10.899158 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899163 | orchestrator |
2026-04-07 02:33:10.899168 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899173 | orchestrator | Tuesday 07 April 2026  02:33:09 +0000 (0:00:00.227)       0:00:07.415 *********
2026-04-07 02:33:10.899178 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899183 | orchestrator |
2026-04-07 02:33:10.899188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899193 | orchestrator | Tuesday 07 April 2026  02:33:09 +0000 (0:00:00.239)       0:00:07.654 *********
2026-04-07 02:33:10.899201 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899206 | orchestrator |
2026-04-07 02:33:10.899212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899216 | orchestrator | Tuesday 07 April 2026  02:33:10 +0000 (0:00:00.217)       0:00:07.872 *********
2026-04-07 02:33:10.899221 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899226 | orchestrator |
2026-04-07 02:33:10.899232 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899236 | orchestrator | Tuesday 07 April 2026  02:33:10 +0000 (0:00:00.223)       0:00:08.096 *********
2026-04-07 02:33:10.899241 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899246 | orchestrator |
2026-04-07 02:33:10.899253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:10.899261 | orchestrator | Tuesday 07 April 2026  02:33:10 +0000 (0:00:00.213)       0:00:08.309 *********
2026-04-07 02:33:10.899269 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:10.899280 | orchestrator |
2026-04-07 02:33:10.899297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.380863 | orchestrator | Tuesday 07 April 2026  02:33:10 +0000 (0:00:00.230)       0:00:08.540 *********
2026-04-07 02:33:19.380958 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.380967 | orchestrator |
2026-04-07 02:33:19.380973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.380979 | orchestrator | Tuesday 07 April 2026  02:33:11 +0000 (0:00:00.242)       0:00:08.783 *********
2026-04-07 02:33:19.380984 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-07 02:33:19.380990 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-07 02:33:19.381007 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-07 02:33:19.381012 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-07 02:33:19.381017 | orchestrator |
2026-04-07 02:33:19.381022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.381027 | orchestrator | Tuesday 07 April 2026  02:33:12 +0000 (0:00:01.205)       0:00:09.989 *********
2026-04-07 02:33:19.381031 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381036 | orchestrator |
2026-04-07 02:33:19.381041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.381046 | orchestrator | Tuesday 07 April 2026  02:33:12 +0000 (0:00:00.214)       0:00:10.203 *********
2026-04-07 02:33:19.381050 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381055 | orchestrator |
2026-04-07 02:33:19.381059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.381064 | orchestrator | Tuesday 07 April 2026  02:33:12 +0000 (0:00:00.219)       0:00:10.422 *********
2026-04-07 02:33:19.381069 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381073 | orchestrator |
2026-04-07 02:33:19.381078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:19.381082 | orchestrator | Tuesday 07 April 2026  02:33:12 +0000 (0:00:00.228)       0:00:10.650 *********
2026-04-07 02:33:19.381087 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381091 | orchestrator |
2026-04-07 02:33:19.381096 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-07 02:33:19.381101 | orchestrator | Tuesday 07 April 2026  02:33:13 +0000 (0:00:00.242)       0:00:10.893 *********
2026-04-07 02:33:19.381106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-07 02:33:19.381110 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-07 02:33:19.381115 | orchestrator |
2026-04-07 02:33:19.381119 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-07 02:33:19.381124 | orchestrator | Tuesday 07 April 2026  02:33:13 +0000 (0:00:00.204)       0:00:11.098 *********
2026-04-07 02:33:19.381128 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381133 | orchestrator |
2026-04-07 02:33:19.381137 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-07 02:33:19.381142 | orchestrator | Tuesday 07 April 2026  02:33:13 +0000 (0:00:00.152)       0:00:11.250 *********
2026-04-07 02:33:19.381164 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381169 | orchestrator |
2026-04-07 02:33:19.381174 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-07 02:33:19.381178 | orchestrator | Tuesday 07 April 2026  02:33:13 +0000 (0:00:00.175)       0:00:11.426 *********
2026-04-07 02:33:19.381183 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381187 | orchestrator |
2026-04-07 02:33:19.381192 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-07 02:33:19.381196 | orchestrator | Tuesday 07 April 2026  02:33:13 +0000 (0:00:00.160)       0:00:11.586 *********
2026-04-07 02:33:19.381201 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:33:19.381205 | orchestrator |
2026-04-07 02:33:19.381210 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-07 02:33:19.381215 | orchestrator | Tuesday 07 April 2026  02:33:14 +0000 (0:00:00.149)       0:00:11.736 *********
2026-04-07 02:33:19.381220 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44abcd21-31e3-595d-ad07-7c010500a60a'}})
2026-04-07 02:33:19.381226 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}})
2026-04-07 02:33:19.381230 | orchestrator |
2026-04-07 02:33:19.381235 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-07 02:33:19.381239 | orchestrator | Tuesday 07 April 2026  02:33:14 +0000 (0:00:00.182)       0:00:11.918 *********
2026-04-07 02:33:19.381244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44abcd21-31e3-595d-ad07-7c010500a60a'}})
2026-04-07 02:33:19.381251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}})
2026-04-07 02:33:19.381255 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381260 | orchestrator |
2026-04-07 02:33:19.381265 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-07 02:33:19.381269 | orchestrator | Tuesday 07 April 2026  02:33:14 +0000 (0:00:00.378)       0:00:12.297 *********
2026-04-07 02:33:19.381274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44abcd21-31e3-595d-ad07-7c010500a60a'}})
2026-04-07 02:33:19.381279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}})
2026-04-07 02:33:19.381283 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381288 | orchestrator |
2026-04-07 02:33:19.381292 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-07 02:33:19.381297 | orchestrator | Tuesday 07 April 2026  02:33:14 +0000 (0:00:00.181)       0:00:12.478 *********
2026-04-07 02:33:19.381301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44abcd21-31e3-595d-ad07-7c010500a60a'}})
2026-04-07 02:33:19.381317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}})
2026-04-07 02:33:19.381322 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381327 | orchestrator |
2026-04-07 02:33:19.381332 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-07 02:33:19.381336 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.184)       0:00:12.663 *********
2026-04-07 02:33:19.381341 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:33:19.381345 | orchestrator |
2026-04-07 02:33:19.381350 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-07 02:33:19.381395 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.160)       0:00:12.823 *********
2026-04-07 02:33:19.381402 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:33:19.381407 | orchestrator |
2026-04-07 02:33:19.381411 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-07 02:33:19.381416 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.159)       0:00:12.982 *********
2026-04-07 02:33:19.381425 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381430 | orchestrator |
2026-04-07 02:33:19.381434 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-07 02:33:19.381439 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.131)       0:00:13.114 *********
2026-04-07 02:33:19.381443 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381448 | orchestrator |
2026-04-07 02:33:19.381452 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-07 02:33:19.381457 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.145)       0:00:13.260 *********
2026-04-07 02:33:19.381461 | orchestrator | skipping: [testbed-node-3]
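The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks above turn each device's `osd_lvm_uuid` into matching `data`/`data_vg` names. A minimal sketch of that naming scheme, inferred from the values visible in this log (hypothetical helper, not the playbook's actual code):

```python
# Sketch (assumption): derive a "block only" lvm_volumes entry from an
# OSD device's LVM UUID, matching the names printed later in this log.
def lvm_volume_for(osd_lvm_uuid: str) -> dict:
    return {
        "data": f"osd-block-{osd_lvm_uuid}",   # logical volume name
        "data_vg": f"ceph-{osd_lvm_uuid}",     # volume group name
    }

# Values taken from the testbed-node-3 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "44abcd21-31e3-595d-ad07-7c010500a60a"},
    "sdc": {"osd_lvm_uuid": "116f5715-f5f6-56e4-87eb-3f2be33e5f2a"},
}
lvm_volumes = [lvm_volume_for(v["osd_lvm_uuid"]) for v in ceph_osd_devices.values()]
```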
2026-04-07 02:33:19.381466 | orchestrator |
2026-04-07 02:33:19.381471 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-07 02:33:19.381475 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.160)       0:00:13.420 *********
2026-04-07 02:33:19.381480 | orchestrator | ok: [testbed-node-3] => {
2026-04-07 02:33:19.381484 | orchestrator |     "ceph_osd_devices": {
2026-04-07 02:33:19.381489 | orchestrator |         "sdb": {
2026-04-07 02:33:19.381494 | orchestrator |             "osd_lvm_uuid": "44abcd21-31e3-595d-ad07-7c010500a60a"
2026-04-07 02:33:19.381499 | orchestrator |         },
2026-04-07 02:33:19.381504 | orchestrator |         "sdc": {
2026-04-07 02:33:19.381508 | orchestrator |             "osd_lvm_uuid": "116f5715-f5f6-56e4-87eb-3f2be33e5f2a"
2026-04-07 02:33:19.381513 | orchestrator |         }
2026-04-07 02:33:19.381518 | orchestrator |     }
2026-04-07 02:33:19.381523 | orchestrator | }
2026-04-07 02:33:19.381528 | orchestrator |
2026-04-07 02:33:19.381532 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-07 02:33:19.381537 | orchestrator | Tuesday 07 April 2026  02:33:15 +0000 (0:00:00.164)       0:00:13.584 *********
2026-04-07 02:33:19.381541 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381546 | orchestrator |
2026-04-07 02:33:19.381550 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-07 02:33:19.381555 | orchestrator | Tuesday 07 April 2026  02:33:16 +0000 (0:00:00.146)       0:00:13.731 *********
2026-04-07 02:33:19.381559 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381564 | orchestrator |
2026-04-07 02:33:19.381568 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-07 02:33:19.381573 | orchestrator | Tuesday 07 April 2026  02:33:16 +0000 (0:00:00.158)       0:00:13.863 *********
2026-04-07 02:33:19.381578 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:33:19.381582 | orchestrator |
2026-04-07 02:33:19.381587 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-07 02:33:19.381591 | orchestrator | Tuesday 07 April 2026  02:33:16 +0000 (0:00:00.158)       0:00:14.021 *********
2026-04-07 02:33:19.381596 | orchestrator | changed: [testbed-node-3] => {
2026-04-07 02:33:19.381601 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-07 02:33:19.381605 | orchestrator |         "ceph_osd_devices": {
2026-04-07 02:33:19.381610 | orchestrator |             "sdb": {
2026-04-07 02:33:19.381614 | orchestrator |                 "osd_lvm_uuid": "44abcd21-31e3-595d-ad07-7c010500a60a"
2026-04-07 02:33:19.381619 | orchestrator |             },
2026-04-07 02:33:19.381624 | orchestrator |             "sdc": {
2026-04-07 02:33:19.381629 | orchestrator |                 "osd_lvm_uuid": "116f5715-f5f6-56e4-87eb-3f2be33e5f2a"
2026-04-07 02:33:19.381633 | orchestrator |             }
2026-04-07 02:33:19.381638 | orchestrator |         },
2026-04-07 02:33:19.381642 | orchestrator |         "lvm_volumes": [
2026-04-07 02:33:19.381647 | orchestrator |             {
2026-04-07 02:33:19.381652 | orchestrator |                 "data": "osd-block-44abcd21-31e3-595d-ad07-7c010500a60a",
2026-04-07 02:33:19.381656 | orchestrator |                 "data_vg": "ceph-44abcd21-31e3-595d-ad07-7c010500a60a"
2026-04-07 02:33:19.381661 | orchestrator |             },
2026-04-07 02:33:19.381666 | orchestrator |             {
2026-04-07 02:33:19.381670 | orchestrator |                 "data": "osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a",
2026-04-07 02:33:19.381679 | orchestrator |                 "data_vg": "ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a"
2026-04-07 02:33:19.381683 | orchestrator |             }
2026-04-07 02:33:19.381688 | orchestrator |         ]
2026-04-07 02:33:19.381692 | orchestrator |     }
2026-04-07 02:33:19.381697 | orchestrator | }
2026-04-07 02:33:19.381702 | orchestrator |
2026-04-07 02:33:19.381706 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-07 02:33:19.381711 | orchestrator | Tuesday 07 April 2026  02:33:16 +0000 (0:00:00.440)       0:00:14.462 *********
2026-04-07 02:33:19.381715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 02:33:19.381720 | orchestrator |
2026-04-07 02:33:19.381724 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-07 02:33:19.381729 | orchestrator |
2026-04-07 02:33:19.381733 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-07 02:33:19.381738 | orchestrator | Tuesday 07 April 2026  02:33:18 +0000 (0:00:01.999)       0:00:16.462 *********
2026-04-07 02:33:19.381742 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-07 02:33:19.381747 | orchestrator |
2026-04-07 02:33:19.381752 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-07 02:33:19.381756 | orchestrator | Tuesday 07 April 2026  02:33:19 +0000 (0:00:00.292)       0:00:16.755 *********
2026-04-07 02:33:19.381761 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:33:19.381765 | orchestrator |
2026-04-07 02:33:19.381773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.640663 | orchestrator | Tuesday 07 April 2026  02:33:19 +0000 (0:00:00.272)       0:00:17.027 *********
2026-04-07 02:33:29.640758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-07 02:33:29.640769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-07 02:33:29.640776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-07 02:33:29.640796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-07 02:33:29.640803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-07 02:33:29.640810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-07 02:33:29.640817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-07 02:33:29.640824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-07 02:33:29.640831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-07 02:33:29.640838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-07 02:33:29.640844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-07 02:33:29.640851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-07 02:33:29.640858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-07 02:33:29.640865 | orchestrator |
2026-04-07 02:33:29.640873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.640879 | orchestrator | Tuesday 07 April 2026  02:33:19 +0000 (0:00:00.425)       0:00:17.453 *********
2026-04-07 02:33:29.640886 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.640906 | orchestrator |
2026-04-07 02:33:29.640913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.640920 | orchestrator | Tuesday 07 April 2026  02:33:20 +0000 (0:00:00.249)       0:00:17.702 *********
2026-04-07 02:33:29.640927 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.640934 | orchestrator |
2026-04-07 02:33:29.640948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.640954 | orchestrator | Tuesday 07 April 2026  02:33:20 +0000 (0:00:00.198)       0:00:17.901 *********
2026-04-07 02:33:29.640978 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.640986 | orchestrator |
2026-04-07 02:33:29.640992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.640999 | orchestrator | Tuesday 07 April 2026  02:33:20 +0000 (0:00:00.233)       0:00:18.134 *********
2026-04-07 02:33:29.641006 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641012 | orchestrator |
2026-04-07 02:33:29.641019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641025 | orchestrator | Tuesday 07 April 2026  02:33:21 +0000 (0:00:00.678)       0:00:18.813 *********
2026-04-07 02:33:29.641032 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641039 | orchestrator |
2026-04-07 02:33:29.641045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641052 | orchestrator | Tuesday 07 April 2026  02:33:21 +0000 (0:00:00.249)       0:00:19.062 *********
2026-04-07 02:33:29.641058 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641065 | orchestrator |
2026-04-07 02:33:29.641071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641078 | orchestrator | Tuesday 07 April 2026  02:33:21 +0000 (0:00:00.224)       0:00:19.287 *********
2026-04-07 02:33:29.641084 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641091 | orchestrator |
2026-04-07 02:33:29.641097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641104 | orchestrator | Tuesday 07 April 2026  02:33:21 +0000 (0:00:00.260)       0:00:19.547 *********
2026-04-07 02:33:29.641110 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641117 | orchestrator |
2026-04-07 02:33:29.641124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641130 | orchestrator | Tuesday 07 April 2026  02:33:22 +0000 (0:00:00.257)       0:00:19.805 *********
2026-04-07 02:33:29.641137 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945)
2026-04-07 02:33:29.641145 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945)
2026-04-07 02:33:29.641152 | orchestrator |
2026-04-07 02:33:29.641158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641165 | orchestrator | Tuesday 07 April 2026  02:33:22 +0000 (0:00:00.506)       0:00:20.311 *********
2026-04-07 02:33:29.641172 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c)
2026-04-07 02:33:29.641178 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c)
2026-04-07 02:33:29.641185 | orchestrator |
2026-04-07 02:33:29.641192 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641198 | orchestrator | Tuesday 07 April 2026  02:33:23 +0000 (0:00:00.461)       0:00:20.773 *********
2026-04-07 02:33:29.641205 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f)
2026-04-07 02:33:29.641211 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f)
2026-04-07 02:33:29.641219 | orchestrator |
2026-04-07 02:33:29.641227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641247 | orchestrator | Tuesday 07 April 2026  02:33:23 +0000 (0:00:00.519)       0:00:21.293 *********
2026-04-07 02:33:29.641255 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc)
2026-04-07 02:33:29.641263 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc)
2026-04-07 02:33:29.641270 | orchestrator |
2026-04-07 02:33:29.641278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:33:29.641290 | orchestrator | Tuesday 07 April 2026  02:33:24 +0000 (0:00:00.747)       0:00:22.040 *********
2026-04-07 02:33:29.641297 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-07 02:33:29.641309 | orchestrator |
2026-04-07 02:33:29.641316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641322 | orchestrator | Tuesday 07 April 2026  02:33:25 +0000 (0:00:00.663)       0:00:22.703 *********
2026-04-07 02:33:29.641329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-07 02:33:29.641335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-07 02:33:29.641350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-07 02:33:29.641356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-07 02:33:29.641382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-07 02:33:29.641389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-07 02:33:29.641396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-07 02:33:29.641402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-07 02:33:29.641409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-07 02:33:29.641415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-07 02:33:29.641423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-07 02:33:29.641429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-07 02:33:29.641445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-07 02:33:29.641452 | orchestrator |
2026-04-07 02:33:29.641466 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641473 | orchestrator | Tuesday 07 April 2026  02:33:26 +0000 (0:00:00.955)       0:00:23.658 *********
2026-04-07 02:33:29.641479 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641486 | orchestrator |
2026-04-07 02:33:29.641492 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641499 | orchestrator | Tuesday 07 April 2026  02:33:26 +0000 (0:00:00.207)       0:00:23.866 *********
2026-04-07 02:33:29.641505 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641512 | orchestrator |
2026-04-07 02:33:29.641519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641525 | orchestrator | Tuesday 07 April 2026  02:33:26 +0000 (0:00:00.231)       0:00:24.098 *********
2026-04-07 02:33:29.641532 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641538 | orchestrator |
2026-04-07 02:33:29.641545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641551 | orchestrator | Tuesday 07 April 2026  02:33:26 +0000 (0:00:00.223)       0:00:24.321 *********
2026-04-07 02:33:29.641558 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641564 | orchestrator |
2026-04-07 02:33:29.641571 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641577 | orchestrator | Tuesday 07 April 2026  02:33:26 +0000 (0:00:00.241)       0:00:24.563 *********
2026-04-07 02:33:29.641584 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641590 | orchestrator |
2026-04-07 02:33:29.641597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641604 | orchestrator | Tuesday 07 April 2026  02:33:27 +0000 (0:00:00.237)       0:00:24.800 *********
2026-04-07 02:33:29.641610 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641617 | orchestrator |
2026-04-07 02:33:29.641623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641630 | orchestrator | Tuesday 07 April 2026  02:33:27 +0000 (0:00:00.219)       0:00:25.020 *********
2026-04-07 02:33:29.641642 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641649 | orchestrator |
2026-04-07 02:33:29.641655 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641662 | orchestrator | Tuesday 07 April 2026  02:33:27 +0000 (0:00:00.232)       0:00:25.252 *********
2026-04-07 02:33:29.641668 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:29.641675 | orchestrator |
2026-04-07 02:33:29.641681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641687 | orchestrator | Tuesday 07 April 2026  02:33:27 +0000 (0:00:00.240)       0:00:25.492 *********
2026-04-07 02:33:29.641698 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-07 02:33:29.641710 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-07 02:33:29.641722 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-07 02:33:29.641734 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-07 02:33:29.641746 | orchestrator |
2026-04-07 02:33:29.641759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:29.641772 | orchestrator | Tuesday 07 April 2026  02:33:28 +0000 (0:00:01.018)       0:00:26.511 *********
2026-04-07 02:33:29.641779 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.200876 | orchestrator |
2026-04-07 02:33:36.200985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:36.201002 | orchestrator | Tuesday 07 April 2026  02:33:29 +0000 (0:00:00.773)       0:00:27.284 *********
2026-04-07 02:33:36.201014 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201026 | orchestrator |
2026-04-07 02:33:36.201038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:36.201049 | orchestrator | Tuesday 07 April 2026  02:33:29 +0000 (0:00:00.227)       0:00:27.512 *********
2026-04-07 02:33:36.201076 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201088 | orchestrator |
2026-04-07 02:33:36.201099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:33:36.201110 | orchestrator | Tuesday 07 April 2026  02:33:30 +0000 (0:00:00.285)       0:00:27.797 *********
2026-04-07 02:33:36.201121 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201131 | orchestrator |
2026-04-07 02:33:36.201142 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-07 02:33:36.201153 | orchestrator | Tuesday 07 April 2026  02:33:30 +0000 (0:00:00.238)       0:00:28.036 *********
2026-04-07 02:33:36.201164 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-07 02:33:36.201175 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-07 02:33:36.201185 | orchestrator |
2026-04-07 02:33:36.201196 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-07 02:33:36.201207 | orchestrator | Tuesday 07 April 2026  02:33:30 +0000 (0:00:00.187)       0:00:28.223 *********
2026-04-07 02:33:36.201217 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201228 | orchestrator |
2026-04-07 02:33:36.201239 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-07 02:33:36.201250 | orchestrator | Tuesday 07 April 2026  02:33:30 +0000 (0:00:00.145)       0:00:28.368 *********
2026-04-07 02:33:36.201261 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201272 | orchestrator |
2026-04-07 02:33:36.201282 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-07 02:33:36.201293 | orchestrator | Tuesday 07 April 2026  02:33:30 +0000 (0:00:00.138)       0:00:28.507 *********
2026-04-07 02:33:36.201304 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:33:36.201314 | orchestrator |
2026-04-07 02:33:36.201341 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-07 02:33:36.201353 | orchestrator | Tuesday 07 April 2026  02:33:31 +0000 (0:00:00.156)       0:00:28.663 *********
2026-04-07 02:33:36.201392 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:33:36.201409 | orchestrator |
2026-04-07 02:33:36.201421 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-07 02:33:36.201434 | orchestrator | Tuesday 07 April 2026  02:33:31 +0000 (0:00:00.153)       0:00:28.817 *********
2026-04-07 02:33:36.201470 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ccafa0da-42f8-5022-b95e-1902d46c646f'}})
2026-04-07 02:33:36.201483 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8941099b-00de-50f1-81f7-f26159704c09'}})
2026-04-07 02:33:36.201496 | orchestrator |
2026-04-07 02:33:36.201509 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-07 02:33:36.201521 | orchestrator | Tuesday 07 April 2026 02:33:31 +0000 (0:00:00.206) 0:00:29.023 ********* 2026-04-07 02:33:36.201535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ccafa0da-42f8-5022-b95e-1902d46c646f'}})  2026-04-07 02:33:36.201549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8941099b-00de-50f1-81f7-f26159704c09'}})  2026-04-07 02:33:36.201562 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201575 | orchestrator | 2026-04-07 02:33:36.201588 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-07 02:33:36.201601 | orchestrator | Tuesday 07 April 2026 02:33:31 +0000 (0:00:00.162) 0:00:29.185 ********* 2026-04-07 02:33:36.201614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ccafa0da-42f8-5022-b95e-1902d46c646f'}})  2026-04-07 02:33:36.201626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8941099b-00de-50f1-81f7-f26159704c09'}})  2026-04-07 02:33:36.201639 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201652 | orchestrator | 2026-04-07 02:33:36.201663 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-07 02:33:36.201674 | orchestrator | Tuesday 07 April 2026 02:33:31 +0000 (0:00:00.432) 0:00:29.618 ********* 2026-04-07 02:33:36.201685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ccafa0da-42f8-5022-b95e-1902d46c646f'}})  2026-04-07 02:33:36.201695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8941099b-00de-50f1-81f7-f26159704c09'}})  2026-04-07 02:33:36.201706 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201717 | 
orchestrator | 2026-04-07 02:33:36.201728 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-07 02:33:36.201738 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 (0:00:00.168) 0:00:29.787 ********* 2026-04-07 02:33:36.201749 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:33:36.201760 | orchestrator | 2026-04-07 02:33:36.201771 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-07 02:33:36.201781 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 (0:00:00.152) 0:00:29.940 ********* 2026-04-07 02:33:36.201792 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:33:36.201802 | orchestrator | 2026-04-07 02:33:36.201813 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-07 02:33:36.201824 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 (0:00:00.177) 0:00:30.117 ********* 2026-04-07 02:33:36.201851 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201863 | orchestrator | 2026-04-07 02:33:36.201874 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-07 02:33:36.201885 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 (0:00:00.144) 0:00:30.262 ********* 2026-04-07 02:33:36.201895 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201906 | orchestrator | 2026-04-07 02:33:36.201917 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-07 02:33:36.201928 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 (0:00:00.153) 0:00:30.416 ********* 2026-04-07 02:33:36.201944 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.201955 | orchestrator | 2026-04-07 02:33:36.201966 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-07 02:33:36.201977 | orchestrator | Tuesday 07 April 2026 02:33:32 +0000 
(0:00:00.144) 0:00:30.560 ********* 2026-04-07 02:33:36.201996 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 02:33:36.202007 | orchestrator |  "ceph_osd_devices": { 2026-04-07 02:33:36.202083 | orchestrator |  "sdb": { 2026-04-07 02:33:36.202097 | orchestrator |  "osd_lvm_uuid": "ccafa0da-42f8-5022-b95e-1902d46c646f" 2026-04-07 02:33:36.202108 | orchestrator |  }, 2026-04-07 02:33:36.202119 | orchestrator |  "sdc": { 2026-04-07 02:33:36.202131 | orchestrator |  "osd_lvm_uuid": "8941099b-00de-50f1-81f7-f26159704c09" 2026-04-07 02:33:36.202141 | orchestrator |  } 2026-04-07 02:33:36.202152 | orchestrator |  } 2026-04-07 02:33:36.202163 | orchestrator | } 2026-04-07 02:33:36.202174 | orchestrator | 2026-04-07 02:33:36.202186 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-07 02:33:36.202205 | orchestrator | Tuesday 07 April 2026 02:33:33 +0000 (0:00:00.183) 0:00:30.744 ********* 2026-04-07 02:33:36.202223 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.202240 | orchestrator | 2026-04-07 02:33:36.202257 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-07 02:33:36.202275 | orchestrator | Tuesday 07 April 2026 02:33:33 +0000 (0:00:00.162) 0:00:30.906 ********* 2026-04-07 02:33:36.202293 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.202312 | orchestrator | 2026-04-07 02:33:36.202324 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-07 02:33:36.202334 | orchestrator | Tuesday 07 April 2026 02:33:33 +0000 (0:00:00.142) 0:00:31.049 ********* 2026-04-07 02:33:36.202345 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:33:36.202356 | orchestrator | 2026-04-07 02:33:36.202427 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-07 02:33:36.202439 | orchestrator | Tuesday 07 April 2026 02:33:33 +0000 
(0:00:00.140) 0:00:31.190 ********* 2026-04-07 02:33:36.202449 | orchestrator | changed: [testbed-node-4] => { 2026-04-07 02:33:36.202461 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-07 02:33:36.202472 | orchestrator |  "ceph_osd_devices": { 2026-04-07 02:33:36.202483 | orchestrator |  "sdb": { 2026-04-07 02:33:36.202494 | orchestrator |  "osd_lvm_uuid": "ccafa0da-42f8-5022-b95e-1902d46c646f" 2026-04-07 02:33:36.202505 | orchestrator |  }, 2026-04-07 02:33:36.202516 | orchestrator |  "sdc": { 2026-04-07 02:33:36.202527 | orchestrator |  "osd_lvm_uuid": "8941099b-00de-50f1-81f7-f26159704c09" 2026-04-07 02:33:36.202537 | orchestrator |  } 2026-04-07 02:33:36.202548 | orchestrator |  }, 2026-04-07 02:33:36.202559 | orchestrator |  "lvm_volumes": [ 2026-04-07 02:33:36.202570 | orchestrator |  { 2026-04-07 02:33:36.202581 | orchestrator |  "data": "osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f", 2026-04-07 02:33:36.202591 | orchestrator |  "data_vg": "ceph-ccafa0da-42f8-5022-b95e-1902d46c646f" 2026-04-07 02:33:36.202602 | orchestrator |  }, 2026-04-07 02:33:36.202613 | orchestrator |  { 2026-04-07 02:33:36.202624 | orchestrator |  "data": "osd-block-8941099b-00de-50f1-81f7-f26159704c09", 2026-04-07 02:33:36.202634 | orchestrator |  "data_vg": "ceph-8941099b-00de-50f1-81f7-f26159704c09" 2026-04-07 02:33:36.202645 | orchestrator |  } 2026-04-07 02:33:36.202656 | orchestrator |  ] 2026-04-07 02:33:36.202666 | orchestrator |  } 2026-04-07 02:33:36.202677 | orchestrator | } 2026-04-07 02:33:36.202688 | orchestrator | 2026-04-07 02:33:36.202699 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-07 02:33:36.202710 | orchestrator | Tuesday 07 April 2026 02:33:34 +0000 (0:00:00.469) 0:00:31.659 ********* 2026-04-07 02:33:36.202721 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-07 02:33:36.202731 | orchestrator | 2026-04-07 02:33:36.202742 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-07 02:33:36.202753 | orchestrator | 2026-04-07 02:33:36.202763 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 02:33:36.202784 | orchestrator | Tuesday 07 April 2026 02:33:35 +0000 (0:00:01.213) 0:00:32.872 ********* 2026-04-07 02:33:36.202795 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-07 02:33:36.202806 | orchestrator | 2026-04-07 02:33:36.202816 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 02:33:36.202827 | orchestrator | Tuesday 07 April 2026 02:33:35 +0000 (0:00:00.296) 0:00:33.169 ********* 2026-04-07 02:33:36.202837 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:33:36.202848 | orchestrator | 2026-04-07 02:33:36.202859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:36.202869 | orchestrator | Tuesday 07 April 2026 02:33:35 +0000 (0:00:00.260) 0:00:33.429 ********* 2026-04-07 02:33:36.202880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-07 02:33:36.202891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-07 02:33:36.202901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-07 02:33:36.202912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-07 02:33:36.202923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-07 02:33:36.202944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-07 02:33:45.828915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-07 02:33:45.828996 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-07 02:33:45.829003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-07 02:33:45.829019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-07 02:33:45.829023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-07 02:33:45.829027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-07 02:33:45.829031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-07 02:33:45.829035 | orchestrator | 2026-04-07 02:33:45.829040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829046 | orchestrator | Tuesday 07 April 2026 02:33:36 +0000 (0:00:00.415) 0:00:33.845 ********* 2026-04-07 02:33:45.829050 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829055 | orchestrator | 2026-04-07 02:33:45.829059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829063 | orchestrator | Tuesday 07 April 2026 02:33:36 +0000 (0:00:00.261) 0:00:34.106 ********* 2026-04-07 02:33:45.829067 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829071 | orchestrator | 2026-04-07 02:33:45.829074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829078 | orchestrator | Tuesday 07 April 2026 02:33:36 +0000 (0:00:00.239) 0:00:34.346 ********* 2026-04-07 02:33:45.829082 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829086 | orchestrator | 2026-04-07 02:33:45.829090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829093 | 
orchestrator | Tuesday 07 April 2026 02:33:36 +0000 (0:00:00.226) 0:00:34.572 ********* 2026-04-07 02:33:45.829097 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829101 | orchestrator | 2026-04-07 02:33:45.829105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829109 | orchestrator | Tuesday 07 April 2026 02:33:37 +0000 (0:00:00.708) 0:00:35.281 ********* 2026-04-07 02:33:45.829113 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829117 | orchestrator | 2026-04-07 02:33:45.829120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829124 | orchestrator | Tuesday 07 April 2026 02:33:37 +0000 (0:00:00.264) 0:00:35.546 ********* 2026-04-07 02:33:45.829143 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829147 | orchestrator | 2026-04-07 02:33:45.829151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829155 | orchestrator | Tuesday 07 April 2026 02:33:38 +0000 (0:00:00.211) 0:00:35.757 ********* 2026-04-07 02:33:45.829158 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829162 | orchestrator | 2026-04-07 02:33:45.829166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829170 | orchestrator | Tuesday 07 April 2026 02:33:38 +0000 (0:00:00.218) 0:00:35.976 ********* 2026-04-07 02:33:45.829173 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829177 | orchestrator | 2026-04-07 02:33:45.829181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829185 | orchestrator | Tuesday 07 April 2026 02:33:38 +0000 (0:00:00.227) 0:00:36.203 ********* 2026-04-07 02:33:45.829188 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2) 2026-04-07 02:33:45.829193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2) 2026-04-07 02:33:45.829197 | orchestrator | 2026-04-07 02:33:45.829201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829205 | orchestrator | Tuesday 07 April 2026 02:33:39 +0000 (0:00:00.458) 0:00:36.662 ********* 2026-04-07 02:33:45.829209 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7) 2026-04-07 02:33:45.829213 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7) 2026-04-07 02:33:45.829216 | orchestrator | 2026-04-07 02:33:45.829220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829224 | orchestrator | Tuesday 07 April 2026 02:33:39 +0000 (0:00:00.472) 0:00:37.134 ********* 2026-04-07 02:33:45.829228 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d) 2026-04-07 02:33:45.829232 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d) 2026-04-07 02:33:45.829235 | orchestrator | 2026-04-07 02:33:45.829239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:33:45.829243 | orchestrator | Tuesday 07 April 2026 02:33:39 +0000 (0:00:00.489) 0:00:37.624 ********* 2026-04-07 02:33:45.829247 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599) 2026-04-07 02:33:45.829251 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599) 2026-04-07 02:33:45.829255 | orchestrator | 2026-04-07 02:33:45.829259 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-07 02:33:45.829263 | orchestrator | Tuesday 07 April 2026 02:33:40 +0000 (0:00:00.468) 0:00:38.092 ********* 2026-04-07 02:33:45.829266 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 02:33:45.829270 | orchestrator | 2026-04-07 02:33:45.829274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829288 | orchestrator | Tuesday 07 April 2026 02:33:40 +0000 (0:00:00.388) 0:00:38.481 ********* 2026-04-07 02:33:45.829292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-07 02:33:45.829296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-07 02:33:45.829300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-07 02:33:45.829306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-07 02:33:45.829310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-07 02:33:45.829314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-07 02:33:45.829321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-07 02:33:45.829325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-07 02:33:45.829328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-07 02:33:45.829332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-07 02:33:45.829336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-07 02:33:45.829340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-07 02:33:45.829344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-07 02:33:45.829347 | orchestrator | 2026-04-07 02:33:45.829351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829355 | orchestrator | Tuesday 07 April 2026 02:33:41 +0000 (0:00:00.727) 0:00:39.208 ********* 2026-04-07 02:33:45.829359 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829363 | orchestrator | 2026-04-07 02:33:45.829403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829407 | orchestrator | Tuesday 07 April 2026 02:33:41 +0000 (0:00:00.206) 0:00:39.414 ********* 2026-04-07 02:33:45.829411 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829415 | orchestrator | 2026-04-07 02:33:45.829419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829422 | orchestrator | Tuesday 07 April 2026 02:33:42 +0000 (0:00:00.257) 0:00:39.672 ********* 2026-04-07 02:33:45.829426 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829430 | orchestrator | 2026-04-07 02:33:45.829434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829438 | orchestrator | Tuesday 07 April 2026 02:33:42 +0000 (0:00:00.214) 0:00:39.887 ********* 2026-04-07 02:33:45.829442 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829446 | orchestrator | 2026-04-07 02:33:45.829450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829453 | orchestrator | Tuesday 07 April 2026 02:33:42 +0000 (0:00:00.262) 0:00:40.149 ********* 2026-04-07 02:33:45.829457 
| orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829461 | orchestrator | 2026-04-07 02:33:45.829465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829469 | orchestrator | Tuesday 07 April 2026 02:33:42 +0000 (0:00:00.216) 0:00:40.366 ********* 2026-04-07 02:33:45.829473 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829476 | orchestrator | 2026-04-07 02:33:45.829480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829484 | orchestrator | Tuesday 07 April 2026 02:33:42 +0000 (0:00:00.221) 0:00:40.588 ********* 2026-04-07 02:33:45.829489 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829493 | orchestrator | 2026-04-07 02:33:45.829497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829502 | orchestrator | Tuesday 07 April 2026 02:33:43 +0000 (0:00:00.234) 0:00:40.822 ********* 2026-04-07 02:33:45.829506 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829510 | orchestrator | 2026-04-07 02:33:45.829515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829519 | orchestrator | Tuesday 07 April 2026 02:33:43 +0000 (0:00:00.208) 0:00:41.030 ********* 2026-04-07 02:33:45.829524 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-07 02:33:45.829528 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-07 02:33:45.829533 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-07 02:33:45.829538 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-07 02:33:45.829542 | orchestrator | 2026-04-07 02:33:45.829550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829555 | orchestrator | Tuesday 07 April 2026 02:33:44 +0000 (0:00:00.999) 0:00:42.030 
********* 2026-04-07 02:33:45.829559 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829564 | orchestrator | 2026-04-07 02:33:45.829568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829572 | orchestrator | Tuesday 07 April 2026 02:33:44 +0000 (0:00:00.223) 0:00:42.254 ********* 2026-04-07 02:33:45.829577 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829581 | orchestrator | 2026-04-07 02:33:45.829585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829589 | orchestrator | Tuesday 07 April 2026 02:33:44 +0000 (0:00:00.227) 0:00:42.481 ********* 2026-04-07 02:33:45.829593 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829598 | orchestrator | 2026-04-07 02:33:45.829602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:33:45.829606 | orchestrator | Tuesday 07 April 2026 02:33:45 +0000 (0:00:00.769) 0:00:43.250 ********* 2026-04-07 02:33:45.829611 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:45.829616 | orchestrator | 2026-04-07 02:33:45.829622 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-07 02:33:50.345519 | orchestrator | Tuesday 07 April 2026 02:33:45 +0000 (0:00:00.221) 0:00:43.472 ********* 2026-04-07 02:33:50.345619 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-07 02:33:50.345633 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-07 02:33:50.345642 | orchestrator | 2026-04-07 02:33:50.345649 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-07 02:33:50.345670 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.199) 0:00:43.672 ********* 2026-04-07 02:33:50.345676 | orchestrator | skipping: 
[testbed-node-5] 2026-04-07 02:33:50.345683 | orchestrator | 2026-04-07 02:33:50.345689 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-07 02:33:50.345695 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.206) 0:00:43.878 ********* 2026-04-07 02:33:50.345701 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:50.345707 | orchestrator | 2026-04-07 02:33:50.345712 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-07 02:33:50.345718 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.140) 0:00:44.018 ********* 2026-04-07 02:33:50.345724 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:50.345729 | orchestrator | 2026-04-07 02:33:50.345735 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-07 02:33:50.345741 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.147) 0:00:44.166 ********* 2026-04-07 02:33:50.345747 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:33:50.345754 | orchestrator | 2026-04-07 02:33:50.345759 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-07 02:33:50.345765 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.145) 0:00:44.311 ********* 2026-04-07 02:33:50.345771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '754aebfc-d76c-537f-941d-8ad36483cdb2'}}) 2026-04-07 02:33:50.345778 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed7b856a-23c6-522d-bad3-e57b6a18196d'}}) 2026-04-07 02:33:50.345783 | orchestrator | 2026-04-07 02:33:50.345789 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-07 02:33:50.345795 | orchestrator | Tuesday 07 April 2026 02:33:46 +0000 (0:00:00.190) 0:00:44.502 ********* 2026-04-07 02:33:50.345801 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '754aebfc-d76c-537f-941d-8ad36483cdb2'}})  2026-04-07 02:33:50.345809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed7b856a-23c6-522d-bad3-e57b6a18196d'}})  2026-04-07 02:33:50.345815 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:50.345836 | orchestrator | 2026-04-07 02:33:50.345842 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-07 02:33:50.345848 | orchestrator | Tuesday 07 April 2026 02:33:47 +0000 (0:00:00.161) 0:00:44.663 ********* 2026-04-07 02:33:50.345854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '754aebfc-d76c-537f-941d-8ad36483cdb2'}})  2026-04-07 02:33:50.345860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed7b856a-23c6-522d-bad3-e57b6a18196d'}})  2026-04-07 02:33:50.345865 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:50.345871 | orchestrator | 2026-04-07 02:33:50.345879 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-07 02:33:50.345889 | orchestrator | Tuesday 07 April 2026 02:33:47 +0000 (0:00:00.186) 0:00:44.850 ********* 2026-04-07 02:33:50.345899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '754aebfc-d76c-537f-941d-8ad36483cdb2'}})  2026-04-07 02:33:50.345908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed7b856a-23c6-522d-bad3-e57b6a18196d'}})  2026-04-07 02:33:50.345917 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:33:50.345927 | orchestrator | 2026-04-07 02:33:50.345936 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-07 02:33:50.345946 | orchestrator | Tuesday 07 April 2026 02:33:47 +0000 
(0:00:00.170) 0:00:45.020 *********
2026-04-07 02:33:50.345955 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:33:50.345965 | orchestrator |
2026-04-07 02:33:50.345975 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-07 02:33:50.345985 | orchestrator | Tuesday 07 April 2026 02:33:47 +0000 (0:00:00.163) 0:00:45.184 *********
2026-04-07 02:33:50.345995 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:33:50.346007 | orchestrator |
2026-04-07 02:33:50.346065 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-07 02:33:50.346074 | orchestrator | Tuesday 07 April 2026 02:33:47 +0000 (0:00:00.386) 0:00:45.570 *********
2026-04-07 02:33:50.346084 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346095 | orchestrator |
2026-04-07 02:33:50.346110 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-07 02:33:50.346120 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.158) 0:00:45.728 *********
2026-04-07 02:33:50.346130 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346139 | orchestrator |
2026-04-07 02:33:50.346149 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-07 02:33:50.346158 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.184) 0:00:45.913 *********
2026-04-07 02:33:50.346167 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346177 | orchestrator |
2026-04-07 02:33:50.346187 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-07 02:33:50.346196 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.144) 0:00:46.058 *********
2026-04-07 02:33:50.346207 | orchestrator | ok: [testbed-node-5] => {
2026-04-07 02:33:50.346217 | orchestrator |     "ceph_osd_devices": {
2026-04-07 02:33:50.346227 | orchestrator |         "sdb": {
2026-04-07 02:33:50.346255 | orchestrator |             "osd_lvm_uuid": "754aebfc-d76c-537f-941d-8ad36483cdb2"
2026-04-07 02:33:50.346266 | orchestrator |         },
2026-04-07 02:33:50.346276 | orchestrator |         "sdc": {
2026-04-07 02:33:50.346286 | orchestrator |             "osd_lvm_uuid": "ed7b856a-23c6-522d-bad3-e57b6a18196d"
2026-04-07 02:33:50.346296 | orchestrator |         }
2026-04-07 02:33:50.346305 | orchestrator |     }
2026-04-07 02:33:50.346315 | orchestrator | }
2026-04-07 02:33:50.346325 | orchestrator |
2026-04-07 02:33:50.346341 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-07 02:33:50.346352 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.161) 0:00:46.219 *********
2026-04-07 02:33:50.346363 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346408 | orchestrator |
2026-04-07 02:33:50.346418 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-07 02:33:50.346428 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.158) 0:00:46.377 *********
2026-04-07 02:33:50.346438 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346447 | orchestrator |
2026-04-07 02:33:50.346458 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-07 02:33:50.346465 | orchestrator | Tuesday 07 April 2026 02:33:48 +0000 (0:00:00.147) 0:00:46.525 *********
2026-04-07 02:33:50.346470 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:33:50.346476 | orchestrator |
2026-04-07 02:33:50.346482 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-07 02:33:50.346488 | orchestrator | Tuesday 07 April 2026 02:33:49 +0000 (0:00:00.146) 0:00:46.671 *********
2026-04-07 02:33:50.346494 | orchestrator | changed: [testbed-node-5] => {
2026-04-07 02:33:50.346500 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-07 02:33:50.346506 | orchestrator |         "ceph_osd_devices": {
2026-04-07 02:33:50.346512 | orchestrator |             "sdb": {
2026-04-07 02:33:50.346518 | orchestrator |                 "osd_lvm_uuid": "754aebfc-d76c-537f-941d-8ad36483cdb2"
2026-04-07 02:33:50.346524 | orchestrator |             },
2026-04-07 02:33:50.346529 | orchestrator |             "sdc": {
2026-04-07 02:33:50.346535 | orchestrator |                 "osd_lvm_uuid": "ed7b856a-23c6-522d-bad3-e57b6a18196d"
2026-04-07 02:33:50.346541 | orchestrator |             }
2026-04-07 02:33:50.346547 | orchestrator |         },
2026-04-07 02:33:50.346553 | orchestrator |         "lvm_volumes": [
2026-04-07 02:33:50.346559 | orchestrator |             {
2026-04-07 02:33:50.346565 | orchestrator |                 "data": "osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2",
2026-04-07 02:33:50.346571 | orchestrator |                 "data_vg": "ceph-754aebfc-d76c-537f-941d-8ad36483cdb2"
2026-04-07 02:33:50.346577 | orchestrator |             },
2026-04-07 02:33:50.346583 | orchestrator |             {
2026-04-07 02:33:50.346589 | orchestrator |                 "data": "osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d",
2026-04-07 02:33:50.346594 | orchestrator |                 "data_vg": "ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d"
2026-04-07 02:33:50.346600 | orchestrator |             }
2026-04-07 02:33:50.346606 | orchestrator |         ]
2026-04-07 02:33:50.346612 | orchestrator |     }
2026-04-07 02:33:50.346618 | orchestrator | }
2026-04-07 02:33:50.346624 | orchestrator |
2026-04-07 02:33:50.346629 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-07 02:33:50.346635 | orchestrator | Tuesday 07 April 2026 02:33:49 +0000 (0:00:00.226) 0:00:46.898 *********
2026-04-07 02:33:50.346641 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-07 02:33:50.346647 | orchestrator |
2026-04-07 02:33:50.346653 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 02:33:50.346659 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 02:33:50.346666 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 02:33:50.346672 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 02:33:50.346677 | orchestrator |
2026-04-07 02:33:50.346683 | orchestrator |
2026-04-07 02:33:50.346689 | orchestrator |
2026-04-07 02:33:50.346694 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 02:33:50.346700 | orchestrator | Tuesday 07 April 2026 02:33:50 +0000 (0:00:01.070) 0:00:47.968 *********
2026-04-07 02:33:50.346706 | orchestrator | ===============================================================================
2026-04-07 02:33:50.346712 | orchestrator | Write configuration file ------------------------------------------------ 4.28s
2026-04-07 02:33:50.346723 | orchestrator | Add known partitions to the list of available block devices ------------- 2.10s
2026-04-07 02:33:50.346729 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-04-07 02:33:50.346735 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s
2026-04-07 02:33:50.346740 | orchestrator | Print configuration data ------------------------------------------------ 1.14s
2026-04-07 02:33:50.346746 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2026-04-07 02:33:50.346752 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s
2026-04-07 02:33:50.346758 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-04-07 02:33:50.346763 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s
2026-04-07 02:33:50.346769 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.80s
2026-04-07 02:33:50.346775 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s
2026-04-07 02:33:50.346780 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-04-07 02:33:50.346786 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-04-07 02:33:50.346798 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-04-07 02:33:50.818509 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-04-07 02:33:50.818642 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s
2026-04-07 02:33:50.818669 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-04-07 02:33:50.818714 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.70s
2026-04-07 02:33:50.818735 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-04-07 02:33:50.818753 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-04-07 02:34:13.595946 | orchestrator | 2026-04-07 02:34:13 | INFO  | Task 3de2f029-71f1-4bf3-8635-dd7d9bf0ec91 (sync inventory) is running in background. Output coming soon.
2026-04-07 02:34:45.467584 | orchestrator | 2026-04-07 02:34:15 | INFO  | Starting group_vars file reorganization
2026-04-07 02:34:45.467734 | orchestrator | 2026-04-07 02:34:15 | INFO  | Moved 0 file(s) to their respective directories
2026-04-07 02:34:45.467765 | orchestrator | 2026-04-07 02:34:15 | INFO  | Group_vars file reorganization completed
2026-04-07 02:34:45.467785 | orchestrator | 2026-04-07 02:34:18 | INFO  | Starting variable preparation from inventory
2026-04-07 02:34:45.467805 | orchestrator | 2026-04-07 02:34:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-07 02:34:45.467824 | orchestrator | 2026-04-07 02:34:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-07 02:34:45.467843 | orchestrator | 2026-04-07 02:34:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-07 02:34:45.467861 | orchestrator | 2026-04-07 02:34:21 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-07 02:34:45.467879 | orchestrator | 2026-04-07 02:34:21 | INFO  | Variable preparation completed
2026-04-07 02:34:45.467897 | orchestrator | 2026-04-07 02:34:23 | INFO  | Starting inventory overwrite handling
2026-04-07 02:34:45.467914 | orchestrator | 2026-04-07 02:34:23 | INFO  | Handling group overwrites in 99-overwrite
2026-04-07 02:34:45.467934 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removing group frr:children from 60-generic
2026-04-07 02:34:45.467952 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-07 02:34:45.467970 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-07 02:34:45.468016 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-07 02:34:45.468028 | orchestrator | 2026-04-07 02:34:23 | INFO  | Handling group overwrites in 20-roles
2026-04-07 02:34:45.468040 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-07 02:34:45.468052 | orchestrator | 2026-04-07 02:34:23 | INFO  | Removed 5 group(s) in total
2026-04-07 02:34:45.468065 | orchestrator | 2026-04-07 02:34:23 | INFO  | Inventory overwrite handling completed
2026-04-07 02:34:45.468078 | orchestrator | 2026-04-07 02:34:24 | INFO  | Starting merge of inventory files
2026-04-07 02:34:45.468092 | orchestrator | 2026-04-07 02:34:24 | INFO  | Inventory files merged successfully
2026-04-07 02:34:45.468104 | orchestrator | 2026-04-07 02:34:30 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-07 02:34:45.468117 | orchestrator | 2026-04-07 02:34:43 | INFO  | Successfully wrote ClusterShell configuration
2026-04-07 02:34:45.468131 | orchestrator | [master 72b4ad5] 2026-04-07-02-34
2026-04-07 02:34:45.468145 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-07 02:34:47.932676 | orchestrator | 2026-04-07 02:34:47 | INFO  | Task 94349fc8-f8c7-4361-b461-157986cde298 (ceph-create-lvm-devices) was prepared for execution.
2026-04-07 02:34:47.932763 | orchestrator | 2026-04-07 02:34:47 | INFO  | It takes a moment until task 94349fc8-f8c7-4361-b461-157986cde298 (ceph-create-lvm-devices) has been started and output is visible here.
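The lvm_volumes entries printed by the "Print configuration data" task above are derived mechanically from ceph_osd_devices: each OSD's logical volume is named osd-block-<osd_lvm_uuid> and its volume group ceph-<osd_lvm_uuid>. A minimal sketch of that mapping (an illustration of the naming convention visible in the log, not the actual OSISM role code):

```python
# Sketch: derive the lvm_volumes list from a ceph_osd_devices dict,
# mirroring the osd-block-<uuid> / ceph-<uuid> naming convention
# seen in the "Print configuration data" output. Hypothetical helper,
# not taken from the playbook.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "754aebfc-d76c-537f-941d-8ad36483cdb2"},
    "sdc": {"osd_lvm_uuid": "ed7b856a-23c6-522d-bad3-e57b6a18196d"},
}

def lvm_volumes(devices: dict) -> list[dict]:
    # One entry per OSD device: LV name ("data") and VG name ("data_vg")
    # are both keyed off the per-device osd_lvm_uuid.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

print(lvm_volumes(ceph_osd_devices))
```

Because the names are pure functions of the UUID, the same structure can be regenerated deterministically on every run, which is what allows the later ceph-create-lvm-devices play to be idempotent.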
2026-04-07 02:35:00.902733 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-07 02:35:00.902825 | orchestrator | 2.16.14
2026-04-07 02:35:00.902840 | orchestrator |
2026-04-07 02:35:00.902850 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-07 02:35:00.902861 | orchestrator |
2026-04-07 02:35:00.902870 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-07 02:35:00.902879 | orchestrator | Tuesday 07 April 2026 02:34:52 +0000 (0:00:00.323) 0:00:00.323 *********
2026-04-07 02:35:00.902889 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 02:35:00.902898 | orchestrator |
2026-04-07 02:35:00.902907 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-07 02:35:00.902916 | orchestrator | Tuesday 07 April 2026 02:34:53 +0000 (0:00:00.291) 0:00:00.615 *********
2026-04-07 02:35:00.902925 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:35:00.902934 | orchestrator |
2026-04-07 02:35:00.902943 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.902951 | orchestrator | Tuesday 07 April 2026 02:34:53 +0000 (0:00:00.267) 0:00:00.882 *********
2026-04-07 02:35:00.902960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-07 02:35:00.902990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-07 02:35:00.903007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-07 02:35:00.903021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-07 02:35:00.903049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-07 02:35:00.903065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-07 02:35:00.903091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-07 02:35:00.903107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-07 02:35:00.903122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-07 02:35:00.903138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-07 02:35:00.903174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-07 02:35:00.903184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-07 02:35:00.903193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-07 02:35:00.903202 | orchestrator |
2026-04-07 02:35:00.903210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903219 | orchestrator | Tuesday 07 April 2026 02:34:53 +0000 (0:00:00.564) 0:00:01.447 *********
2026-04-07 02:35:00.903228 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903237 | orchestrator |
2026-04-07 02:35:00.903246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903255 | orchestrator | Tuesday 07 April 2026 02:34:54 +0000 (0:00:00.220) 0:00:01.668 *********
2026-04-07 02:35:00.903264 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903272 | orchestrator |
2026-04-07 02:35:00.903281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903290 | orchestrator | Tuesday 07 April 2026 02:34:54 +0000 (0:00:00.207) 0:00:01.875 *********
2026-04-07 02:35:00.903299 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903308 | orchestrator |
2026-04-07 02:35:00.903316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903325 | orchestrator | Tuesday 07 April 2026 02:34:54 +0000 (0:00:00.213) 0:00:02.088 *********
2026-04-07 02:35:00.903334 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903343 | orchestrator |
2026-04-07 02:35:00.903351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903360 | orchestrator | Tuesday 07 April 2026 02:34:54 +0000 (0:00:00.231) 0:00:02.320 *********
2026-04-07 02:35:00.903369 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903377 | orchestrator |
2026-04-07 02:35:00.903410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903422 | orchestrator | Tuesday 07 April 2026 02:34:55 +0000 (0:00:00.224) 0:00:02.545 *********
2026-04-07 02:35:00.903431 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903439 | orchestrator |
2026-04-07 02:35:00.903448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903457 | orchestrator | Tuesday 07 April 2026 02:34:55 +0000 (0:00:00.218) 0:00:02.763 *********
2026-04-07 02:35:00.903466 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903475 | orchestrator |
2026-04-07 02:35:00.903483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903492 | orchestrator | Tuesday 07 April 2026 02:34:55 +0000 (0:00:00.239) 0:00:03.002 *********
2026-04-07 02:35:00.903501 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.903509 | orchestrator |
2026-04-07 02:35:00.903518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903527 | orchestrator | Tuesday 07 April 2026 02:34:55 +0000 (0:00:00.203) 0:00:03.206 *********
2026-04-07 02:35:00.903539 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc)
2026-04-07 02:35:00.903555 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc)
2026-04-07 02:35:00.903569 | orchestrator |
2026-04-07 02:35:00.903584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903617 | orchestrator | Tuesday 07 April 2026 02:34:56 +0000 (0:00:00.458) 0:00:03.665 *********
2026-04-07 02:35:00.903632 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc)
2026-04-07 02:35:00.903647 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc)
2026-04-07 02:35:00.903662 | orchestrator |
2026-04-07 02:35:00.903677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903702 | orchestrator | Tuesday 07 April 2026 02:34:56 +0000 (0:00:00.694) 0:00:04.359 *********
2026-04-07 02:35:00.903717 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539)
2026-04-07 02:35:00.903732 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539)
2026-04-07 02:35:00.903746 | orchestrator |
2026-04-07 02:35:00.903760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903775 | orchestrator | Tuesday 07 April 2026 02:34:57 +0000 (0:00:00.702) 0:00:05.062 *********
2026-04-07 02:35:00.903789 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc)
2026-04-07 02:35:00.903812 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc)
2026-04-07 02:35:00.903827 | orchestrator |
2026-04-07 02:35:00.903842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:00.903858 | orchestrator | Tuesday 07 April 2026 02:34:58 +0000 (0:00:00.957) 0:00:06.020 *********
2026-04-07 02:35:00.903872 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-07 02:35:00.903885 | orchestrator |
2026-04-07 02:35:00.903899 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.903913 | orchestrator | Tuesday 07 April 2026 02:34:58 +0000 (0:00:00.361) 0:00:06.381 *********
2026-04-07 02:35:00.903926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-07 02:35:00.903940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-07 02:35:00.903955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-07 02:35:00.903971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-07 02:35:00.903985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-07 02:35:00.903999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-07 02:35:00.904015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-07 02:35:00.904031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-07 02:35:00.904046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-07 02:35:00.904061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-07 02:35:00.904076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-07 02:35:00.904091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-07 02:35:00.904104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-07 02:35:00.904118 | orchestrator |
2026-04-07 02:35:00.904133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904147 | orchestrator | Tuesday 07 April 2026 02:34:59 +0000 (0:00:00.450) 0:00:06.832 *********
2026-04-07 02:35:00.904160 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904216 | orchestrator |
2026-04-07 02:35:00.904233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904248 | orchestrator | Tuesday 07 April 2026 02:34:59 +0000 (0:00:00.239) 0:00:07.072 *********
2026-04-07 02:35:00.904263 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904278 | orchestrator |
2026-04-07 02:35:00.904287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904296 | orchestrator | Tuesday 07 April 2026 02:34:59 +0000 (0:00:00.232) 0:00:07.304 *********
2026-04-07 02:35:00.904305 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904323 | orchestrator |
2026-04-07 02:35:00.904332 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904341 | orchestrator | Tuesday 07 April 2026 02:35:00 +0000 (0:00:00.233) 0:00:07.538 *********
2026-04-07 02:35:00.904350 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904358 | orchestrator |
2026-04-07 02:35:00.904367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904378 | orchestrator | Tuesday 07 April 2026 02:35:00 +0000 (0:00:00.218) 0:00:07.757 *********
2026-04-07 02:35:00.904425 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904440 | orchestrator |
2026-04-07 02:35:00.904455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904470 | orchestrator | Tuesday 07 April 2026 02:35:00 +0000 (0:00:00.222) 0:00:07.979 *********
2026-04-07 02:35:00.904485 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904500 | orchestrator |
2026-04-07 02:35:00.904515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:00.904529 | orchestrator | Tuesday 07 April 2026 02:35:00 +0000 (0:00:00.219) 0:00:08.199 *********
2026-04-07 02:35:00.904543 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:00.904557 | orchestrator |
2026-04-07 02:35:00.904588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.792940 | orchestrator | Tuesday 07 April 2026 02:35:00 +0000 (0:00:00.221) 0:00:08.420 *********
2026-04-07 02:35:09.793009 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793028 | orchestrator |
2026-04-07 02:35:09.793036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.793044 | orchestrator | Tuesday 07 April 2026 02:35:01 +0000 (0:00:00.773) 0:00:09.194 *********
2026-04-07 02:35:09.793051 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-07 02:35:09.793059 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-07 02:35:09.793066 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-07 02:35:09.793073 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-07 02:35:09.793080 | orchestrator |
2026-04-07 02:35:09.793087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.793094 | orchestrator | Tuesday 07 April 2026 02:35:02 +0000 (0:00:00.713) 0:00:09.908 *********
2026-04-07 02:35:09.793101 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793108 | orchestrator |
2026-04-07 02:35:09.793116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.793123 | orchestrator | Tuesday 07 April 2026 02:35:02 +0000 (0:00:00.230) 0:00:10.138 *********
2026-04-07 02:35:09.793130 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793137 | orchestrator |
2026-04-07 02:35:09.793153 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.793161 | orchestrator | Tuesday 07 April 2026 02:35:02 +0000 (0:00:00.272) 0:00:10.410 *********
2026-04-07 02:35:09.793168 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793175 | orchestrator |
2026-04-07 02:35:09.793181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:09.793188 | orchestrator | Tuesday 07 April 2026 02:35:03 +0000 (0:00:00.247) 0:00:10.658 *********
2026-04-07 02:35:09.793195 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793202 | orchestrator |
2026-04-07 02:35:09.793209 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-07 02:35:09.793217 | orchestrator | Tuesday 07 April 2026 02:35:03 +0000 (0:00:00.213) 0:00:10.871 *********
2026-04-07 02:35:09.793223 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793230 | orchestrator |
2026-04-07 02:35:09.793238 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-07 02:35:09.793245 | orchestrator | Tuesday 07 April 2026 02:35:03 +0000 (0:00:00.151) 0:00:11.023 *********
2026-04-07 02:35:09.793252 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44abcd21-31e3-595d-ad07-7c010500a60a'}})
2026-04-07 02:35:09.793272 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}})
2026-04-07 02:35:09.793280 | orchestrator |
2026-04-07 02:35:09.793287 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-07 02:35:09.793293 | orchestrator | Tuesday 07 April 2026 02:35:03 +0000 (0:00:00.205) 0:00:11.229 *********
2026-04-07 02:35:09.793302 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793309 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793316 | orchestrator |
2026-04-07 02:35:09.793323 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-07 02:35:09.793330 | orchestrator | Tuesday 07 April 2026 02:35:05 +0000 (0:00:02.237) 0:00:13.466 *********
2026-04-07 02:35:09.793336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793351 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793358 | orchestrator |
2026-04-07 02:35:09.793365 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-07 02:35:09.793372 | orchestrator | Tuesday 07 April 2026 02:35:06 +0000 (0:00:00.163) 0:00:13.630 *********
2026-04-07 02:35:09.793379 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793385 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793423 | orchestrator |
2026-04-07 02:35:09.793431 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-07 02:35:09.793438 | orchestrator | Tuesday 07 April 2026 02:35:07 +0000 (0:00:01.588) 0:00:15.219 *********
2026-04-07 02:35:09.793445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793459 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793467 | orchestrator |
2026-04-07 02:35:09.793473 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-07 02:35:09.793481 | orchestrator | Tuesday 07 April 2026 02:35:07 +0000 (0:00:00.385) 0:00:15.388 *********
2026-04-07 02:35:09.793498 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793506 | orchestrator |
2026-04-07 02:35:09.793513 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-07 02:35:09.793519 | orchestrator | Tuesday 07 April 2026 02:35:08 +0000 (0:00:00.385) 0:00:15.773 *********
2026-04-07 02:35:09.793527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793535 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793543 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793551 | orchestrator |
2026-04-07 02:35:09.793560 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-07 02:35:09.793567 | orchestrator | Tuesday 07 April 2026 02:35:08 +0000 (0:00:00.170) 0:00:15.944 *********
2026-04-07 02:35:09.793581 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793589 | orchestrator |
2026-04-07 02:35:09.793597 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-07 02:35:09.793605 | orchestrator | Tuesday 07 April 2026 02:35:08 +0000 (0:00:00.156) 0:00:16.100 *********
2026-04-07 02:35:09.793617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793633 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793640 | orchestrator |
2026-04-07 02:35:09.793648 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-07 02:35:09.793656 | orchestrator | Tuesday 07 April 2026 02:35:08 +0000 (0:00:00.155) 0:00:16.255 *********
2026-04-07 02:35:09.793664 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793671 | orchestrator |
2026-04-07 02:35:09.793678 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-07 02:35:09.793685 | orchestrator | Tuesday 07 April 2026 02:35:08 +0000 (0:00:00.146) 0:00:16.402 *********
2026-04-07 02:35:09.793692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793707 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793714 | orchestrator |
2026-04-07 02:35:09.793721 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-07 02:35:09.793728 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.163) 0:00:16.566 *********
2026-04-07 02:35:09.793735 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:35:09.793742 | orchestrator |
2026-04-07 02:35:09.793749 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-07 02:35:09.793756 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.152) 0:00:16.718 *********
2026-04-07 02:35:09.793762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 02:35:09.793769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 02:35:09.793776 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:35:09.793783 | orchestrator |
2026-04-07 02:35:09.793790 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-07 02:35:09.793797 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.146) 0:00:16.865 *********
2026-04-07 02:35:09.793804 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:09.793811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:09.793818 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:09.793826 | orchestrator | 2026-04-07 02:35:09.793832 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-07 02:35:09.793840 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.157) 0:00:17.022 ********* 2026-04-07 02:35:09.793847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:09.793854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:09.793865 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:09.793873 | orchestrator | 2026-04-07 02:35:09.793879 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-07 02:35:09.793886 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.153) 0:00:17.176 ********* 2026-04-07 02:35:09.793893 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:09.793900 | orchestrator | 2026-04-07 02:35:09.793907 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-07 02:35:09.793919 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.138) 0:00:17.315 ********* 2026-04-07 02:35:16.705991 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.706182 | orchestrator | 2026-04-07 02:35:16.706211 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-04-07 02:35:16.706232 | orchestrator | Tuesday 07 April 2026 02:35:09 +0000 (0:00:00.153) 0:00:17.469 ********* 2026-04-07 02:35:16.706251 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.706271 | orchestrator | 2026-04-07 02:35:16.706291 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-07 02:35:16.706312 | orchestrator | Tuesday 07 April 2026 02:35:10 +0000 (0:00:00.305) 0:00:17.775 ********* 2026-04-07 02:35:16.706332 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:35:16.706354 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-07 02:35:16.706375 | orchestrator | } 2026-04-07 02:35:16.706427 | orchestrator | 2026-04-07 02:35:16.706494 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-07 02:35:16.706515 | orchestrator | Tuesday 07 April 2026 02:35:10 +0000 (0:00:00.150) 0:00:17.925 ********* 2026-04-07 02:35:16.706536 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:35:16.706556 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-07 02:35:16.706576 | orchestrator | } 2026-04-07 02:35:16.706596 | orchestrator | 2026-04-07 02:35:16.706616 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-07 02:35:16.706655 | orchestrator | Tuesday 07 April 2026 02:35:10 +0000 (0:00:00.137) 0:00:18.062 ********* 2026-04-07 02:35:16.706674 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:35:16.706692 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-07 02:35:16.706711 | orchestrator | } 2026-04-07 02:35:16.706729 | orchestrator | 2026-04-07 02:35:16.706748 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-07 02:35:16.706766 | orchestrator | Tuesday 07 April 2026 02:35:10 +0000 (0:00:00.143) 0:00:18.206 ********* 2026-04-07 02:35:16.706785 | orchestrator | ok: 
[testbed-node-3] 2026-04-07 02:35:16.706803 | orchestrator | 2026-04-07 02:35:16.706821 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-07 02:35:16.706839 | orchestrator | Tuesday 07 April 2026 02:35:11 +0000 (0:00:00.726) 0:00:18.932 ********* 2026-04-07 02:35:16.706858 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:16.706876 | orchestrator | 2026-04-07 02:35:16.706894 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-07 02:35:16.706913 | orchestrator | Tuesday 07 April 2026 02:35:11 +0000 (0:00:00.544) 0:00:19.477 ********* 2026-04-07 02:35:16.706934 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:16.706954 | orchestrator | 2026-04-07 02:35:16.706974 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-07 02:35:16.706994 | orchestrator | Tuesday 07 April 2026 02:35:12 +0000 (0:00:00.535) 0:00:20.012 ********* 2026-04-07 02:35:16.707014 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:16.707033 | orchestrator | 2026-04-07 02:35:16.707052 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-07 02:35:16.707071 | orchestrator | Tuesday 07 April 2026 02:35:12 +0000 (0:00:00.178) 0:00:20.190 ********* 2026-04-07 02:35:16.707090 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707107 | orchestrator | 2026-04-07 02:35:16.707127 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-07 02:35:16.707170 | orchestrator | Tuesday 07 April 2026 02:35:12 +0000 (0:00:00.110) 0:00:20.300 ********* 2026-04-07 02:35:16.707187 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707205 | orchestrator | 2026-04-07 02:35:16.707223 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-07 02:35:16.707241 | orchestrator | 
Tuesday 07 April 2026 02:35:12 +0000 (0:00:00.106) 0:00:20.407 ********* 2026-04-07 02:35:16.707260 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:35:16.707279 | orchestrator |  "vgs_report": { 2026-04-07 02:35:16.707300 | orchestrator |  "vg": [] 2026-04-07 02:35:16.707322 | orchestrator |  } 2026-04-07 02:35:16.707343 | orchestrator | } 2026-04-07 02:35:16.707364 | orchestrator | 2026-04-07 02:35:16.707384 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-07 02:35:16.707435 | orchestrator | Tuesday 07 April 2026 02:35:13 +0000 (0:00:00.163) 0:00:20.570 ********* 2026-04-07 02:35:16.707455 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707475 | orchestrator | 2026-04-07 02:35:16.707495 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-07 02:35:16.707515 | orchestrator | Tuesday 07 April 2026 02:35:13 +0000 (0:00:00.143) 0:00:20.714 ********* 2026-04-07 02:35:16.707533 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707553 | orchestrator | 2026-04-07 02:35:16.707571 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-07 02:35:16.707588 | orchestrator | Tuesday 07 April 2026 02:35:13 +0000 (0:00:00.442) 0:00:21.157 ********* 2026-04-07 02:35:16.707606 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707623 | orchestrator | 2026-04-07 02:35:16.707642 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-07 02:35:16.707660 | orchestrator | Tuesday 07 April 2026 02:35:13 +0000 (0:00:00.143) 0:00:21.300 ********* 2026-04-07 02:35:16.707678 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707695 | orchestrator | 2026-04-07 02:35:16.707713 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-07 02:35:16.707731 | orchestrator | Tuesday 
07 April 2026 02:35:13 +0000 (0:00:00.127) 0:00:21.427 ********* 2026-04-07 02:35:16.707749 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707767 | orchestrator | 2026-04-07 02:35:16.707785 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-07 02:35:16.707803 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.168) 0:00:21.596 ********* 2026-04-07 02:35:16.707821 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707839 | orchestrator | 2026-04-07 02:35:16.707857 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-07 02:35:16.707875 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.145) 0:00:21.741 ********* 2026-04-07 02:35:16.707893 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.707911 | orchestrator | 2026-04-07 02:35:16.707929 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-07 02:35:16.707947 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.170) 0:00:21.912 ********* 2026-04-07 02:35:16.707994 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708013 | orchestrator | 2026-04-07 02:35:16.708031 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-07 02:35:16.708051 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.163) 0:00:22.076 ********* 2026-04-07 02:35:16.708070 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708089 | orchestrator | 2026-04-07 02:35:16.708109 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-07 02:35:16.708127 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.159) 0:00:22.236 ********* 2026-04-07 02:35:16.708145 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708163 | orchestrator | 2026-04-07 02:35:16.708183 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-07 02:35:16.708201 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.142) 0:00:22.378 ********* 2026-04-07 02:35:16.708235 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708253 | orchestrator | 2026-04-07 02:35:16.708271 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-07 02:35:16.708290 | orchestrator | Tuesday 07 April 2026 02:35:14 +0000 (0:00:00.152) 0:00:22.531 ********* 2026-04-07 02:35:16.708308 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708326 | orchestrator | 2026-04-07 02:35:16.708355 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-07 02:35:16.708374 | orchestrator | Tuesday 07 April 2026 02:35:15 +0000 (0:00:00.146) 0:00:22.678 ********* 2026-04-07 02:35:16.708421 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708440 | orchestrator | 2026-04-07 02:35:16.708459 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-07 02:35:16.708477 | orchestrator | Tuesday 07 April 2026 02:35:15 +0000 (0:00:00.148) 0:00:22.826 ********* 2026-04-07 02:35:16.708496 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708514 | orchestrator | 2026-04-07 02:35:16.708532 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-07 02:35:16.708551 | orchestrator | Tuesday 07 April 2026 02:35:15 +0000 (0:00:00.377) 0:00:23.204 ********* 2026-04-07 02:35:16.708571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:16.708592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 
'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:16.708610 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708629 | orchestrator | 2026-04-07 02:35:16.708647 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-07 02:35:16.708665 | orchestrator | Tuesday 07 April 2026 02:35:15 +0000 (0:00:00.170) 0:00:23.374 ********* 2026-04-07 02:35:16.708684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:16.708703 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:16.708721 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708739 | orchestrator | 2026-04-07 02:35:16.708758 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-07 02:35:16.708777 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.161) 0:00:23.536 ********* 2026-04-07 02:35:16.708796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:16.708815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:16.708833 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708852 | orchestrator | 2026-04-07 02:35:16.708871 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-07 02:35:16.708890 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.166) 0:00:23.703 ********* 2026-04-07 02:35:16.708908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:16.708926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:16.708945 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.708963 | orchestrator | 2026-04-07 02:35:16.708982 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-07 02:35:16.709000 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.160) 0:00:23.864 ********* 2026-04-07 02:35:16.709031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:16.709050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:16.709068 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:16.709087 | orchestrator | 2026-04-07 02:35:16.709104 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-07 02:35:16.709120 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.194) 0:00:24.059 ********* 2026-04-07 02:35:16.709149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.633985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634182 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634208 | orchestrator | 2026-04-07 02:35:22.634228 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-07 02:35:22.634248 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.169) 0:00:24.228 ********* 2026-04-07 02:35:22.634266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.634283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634300 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634316 | orchestrator | 2026-04-07 02:35:22.634352 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-07 02:35:22.634370 | orchestrator | Tuesday 07 April 2026 02:35:16 +0000 (0:00:00.187) 0:00:24.415 ********* 2026-04-07 02:35:22.634386 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.634432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634450 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634461 | orchestrator | 2026-04-07 02:35:22.634473 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-07 02:35:22.634484 | orchestrator | Tuesday 07 April 2026 02:35:17 +0000 (0:00:00.187) 0:00:24.602 ********* 2026-04-07 02:35:22.634496 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:22.634508 | orchestrator | 2026-04-07 02:35:22.634519 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-07 02:35:22.634530 | orchestrator | Tuesday 07 April 2026 02:35:17 +0000 
(0:00:00.600) 0:00:25.203 ********* 2026-04-07 02:35:22.634541 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:22.634551 | orchestrator | 2026-04-07 02:35:22.634563 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-07 02:35:22.634574 | orchestrator | Tuesday 07 April 2026 02:35:18 +0000 (0:00:00.536) 0:00:25.739 ********* 2026-04-07 02:35:22.634586 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:35:22.634596 | orchestrator | 2026-04-07 02:35:22.634606 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-07 02:35:22.634616 | orchestrator | Tuesday 07 April 2026 02:35:18 +0000 (0:00:00.175) 0:00:25.914 ********* 2026-04-07 02:35:22.634626 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'vg_name': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}) 2026-04-07 02:35:22.634637 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'vg_name': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}) 2026-04-07 02:35:22.634671 | orchestrator | 2026-04-07 02:35:22.634681 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-07 02:35:22.634691 | orchestrator | Tuesday 07 April 2026 02:35:18 +0000 (0:00:00.181) 0:00:26.096 ********* 2026-04-07 02:35:22.634701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.634710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634720 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634730 | orchestrator | 2026-04-07 02:35:22.634739 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-07 02:35:22.634749 | orchestrator | Tuesday 07 April 2026 02:35:18 +0000 (0:00:00.416) 0:00:26.513 ********* 2026-04-07 02:35:22.634759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.634769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634778 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634788 | orchestrator | 2026-04-07 02:35:22.634797 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-07 02:35:22.634807 | orchestrator | Tuesday 07 April 2026 02:35:19 +0000 (0:00:00.172) 0:00:26.685 ********* 2026-04-07 02:35:22.634816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 02:35:22.634826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 02:35:22.634836 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:35:22.634845 | orchestrator | 2026-04-07 02:35:22.634855 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-07 02:35:22.634865 | orchestrator | Tuesday 07 April 2026 02:35:19 +0000 (0:00:00.165) 0:00:26.850 ********* 2026-04-07 02:35:22.634893 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:35:22.634903 | orchestrator |  "lvm_report": { 2026-04-07 02:35:22.634914 | orchestrator |  "lv": [ 2026-04-07 02:35:22.634924 | orchestrator |  { 2026-04-07 02:35:22.634934 | orchestrator |  "lv_name": 
"osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a", 2026-04-07 02:35:22.634945 | orchestrator |  "vg_name": "ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a" 2026-04-07 02:35:22.634954 | orchestrator |  }, 2026-04-07 02:35:22.634964 | orchestrator |  { 2026-04-07 02:35:22.634974 | orchestrator |  "lv_name": "osd-block-44abcd21-31e3-595d-ad07-7c010500a60a", 2026-04-07 02:35:22.634983 | orchestrator |  "vg_name": "ceph-44abcd21-31e3-595d-ad07-7c010500a60a" 2026-04-07 02:35:22.634993 | orchestrator |  } 2026-04-07 02:35:22.635003 | orchestrator |  ], 2026-04-07 02:35:22.635012 | orchestrator |  "pv": [ 2026-04-07 02:35:22.635022 | orchestrator |  { 2026-04-07 02:35:22.635032 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-07 02:35:22.635041 | orchestrator |  "vg_name": "ceph-44abcd21-31e3-595d-ad07-7c010500a60a" 2026-04-07 02:35:22.635051 | orchestrator |  }, 2026-04-07 02:35:22.635061 | orchestrator |  { 2026-04-07 02:35:22.635076 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-07 02:35:22.635087 | orchestrator |  "vg_name": "ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a" 2026-04-07 02:35:22.635097 | orchestrator |  } 2026-04-07 02:35:22.635106 | orchestrator |  ] 2026-04-07 02:35:22.635116 | orchestrator |  } 2026-04-07 02:35:22.635126 | orchestrator | } 2026-04-07 02:35:22.635143 | orchestrator | 2026-04-07 02:35:22.635153 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-07 02:35:22.635162 | orchestrator | 2026-04-07 02:35:22.635172 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 02:35:22.635182 | orchestrator | Tuesday 07 April 2026 02:35:19 +0000 (0:00:00.360) 0:00:27.211 ********* 2026-04-07 02:35:22.635192 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-07 02:35:22.635202 | orchestrator | 2026-04-07 02:35:22.635211 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 
02:35:22.635221 | orchestrator | Tuesday 07 April 2026 02:35:19 +0000 (0:00:00.300) 0:00:27.512 ********* 2026-04-07 02:35:22.635231 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:35:22.635240 | orchestrator | 2026-04-07 02:35:22.635250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635259 | orchestrator | Tuesday 07 April 2026 02:35:20 +0000 (0:00:00.258) 0:00:27.770 ********* 2026-04-07 02:35:22.635269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-07 02:35:22.635278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-07 02:35:22.635288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-07 02:35:22.635297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-07 02:35:22.635307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-07 02:35:22.635317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-07 02:35:22.635326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-07 02:35:22.635336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-07 02:35:22.635345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-07 02:35:22.635355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-07 02:35:22.635364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-07 02:35:22.635374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-07 02:35:22.635383 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-07 02:35:22.635467 | orchestrator | 2026-04-07 02:35:22.635493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635508 | orchestrator | Tuesday 07 April 2026 02:35:20 +0000 (0:00:00.511) 0:00:28.282 ********* 2026-04-07 02:35:22.635526 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635543 | orchestrator | 2026-04-07 02:35:22.635559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635572 | orchestrator | Tuesday 07 April 2026 02:35:20 +0000 (0:00:00.215) 0:00:28.497 ********* 2026-04-07 02:35:22.635582 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635591 | orchestrator | 2026-04-07 02:35:22.635601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635611 | orchestrator | Tuesday 07 April 2026 02:35:21 +0000 (0:00:00.731) 0:00:29.229 ********* 2026-04-07 02:35:22.635620 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635630 | orchestrator | 2026-04-07 02:35:22.635639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635649 | orchestrator | Tuesday 07 April 2026 02:35:21 +0000 (0:00:00.230) 0:00:29.459 ********* 2026-04-07 02:35:22.635658 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635667 | orchestrator | 2026-04-07 02:35:22.635677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:22.635686 | orchestrator | Tuesday 07 April 2026 02:35:22 +0000 (0:00:00.234) 0:00:29.694 ********* 2026-04-07 02:35:22.635706 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635715 | orchestrator | 2026-04-07 02:35:22.635725 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-07 02:35:22.635734 | orchestrator | Tuesday 07 April 2026 02:35:22 +0000 (0:00:00.228) 0:00:29.922 ********* 2026-04-07 02:35:22.635744 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:22.635753 | orchestrator | 2026-04-07 02:35:22.635772 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:34.850509 | orchestrator | Tuesday 07 April 2026 02:35:22 +0000 (0:00:00.231) 0:00:30.154 ********* 2026-04-07 02:35:34.850626 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:34.850643 | orchestrator | 2026-04-07 02:35:34.850656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:34.850669 | orchestrator | Tuesday 07 April 2026 02:35:22 +0000 (0:00:00.213) 0:00:30.367 ********* 2026-04-07 02:35:34.850681 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:34.850693 | orchestrator | 2026-04-07 02:35:34.850704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:34.850716 | orchestrator | Tuesday 07 April 2026 02:35:23 +0000 (0:00:00.272) 0:00:30.639 ********* 2026-04-07 02:35:34.850728 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945) 2026-04-07 02:35:34.850740 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945) 2026-04-07 02:35:34.850751 | orchestrator | 2026-04-07 02:35:34.850780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:34.850792 | orchestrator | Tuesday 07 April 2026 02:35:23 +0000 (0:00:00.526) 0:00:31.166 ********* 2026-04-07 02:35:34.850802 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c) 2026-04-07 02:35:34.850813 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c)
2026-04-07 02:35:34.850823 | orchestrator |
2026-04-07 02:35:34.850834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:34.850845 | orchestrator | Tuesday 07 April 2026 02:35:24 +0000 (0:00:00.512) 0:00:31.678 *********
2026-04-07 02:35:34.850856 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f)
2026-04-07 02:35:34.850867 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f)
2026-04-07 02:35:34.850877 | orchestrator |
2026-04-07 02:35:34.850888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:34.850899 | orchestrator | Tuesday 07 April 2026 02:35:24 +0000 (0:00:00.800) 0:00:32.479 *********
2026-04-07 02:35:34.850910 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc)
2026-04-07 02:35:34.850922 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc)
2026-04-07 02:35:34.850933 | orchestrator |
2026-04-07 02:35:34.850944 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-07 02:35:34.850954 | orchestrator | Tuesday 07 April 2026 02:35:26 +0000 (0:00:01.086) 0:00:33.566 *********
2026-04-07 02:35:34.850966 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-07 02:35:34.850977 | orchestrator |
2026-04-07 02:35:34.850987 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.850998 | orchestrator | Tuesday 07 April 2026 02:35:26 +0000 (0:00:00.392) 0:00:33.958 *********
2026-04-07 02:35:34.851009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-07 02:35:34.851020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-07 02:35:34.851030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-07 02:35:34.851065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-07 02:35:34.851077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-07 02:35:34.851088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-07 02:35:34.851099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-07 02:35:34.851111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-07 02:35:34.851136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-07 02:35:34.851147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-07 02:35:34.851157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-07 02:35:34.851168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-07 02:35:34.851178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-07 02:35:34.851188 | orchestrator |
2026-04-07 02:35:34.851199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851210 | orchestrator | Tuesday 07 April 2026 02:35:26 +0000 (0:00:00.512) 0:00:34.471 *********
2026-04-07 02:35:34.851220 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851231 | orchestrator |
2026-04-07 02:35:34.851242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851252 | orchestrator | Tuesday 07 April 2026 02:35:27 +0000 (0:00:00.252) 0:00:34.723 *********
2026-04-07 02:35:34.851263 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851274 | orchestrator |
2026-04-07 02:35:34.851284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851295 | orchestrator | Tuesday 07 April 2026 02:35:27 +0000 (0:00:00.214) 0:00:34.938 *********
2026-04-07 02:35:34.851307 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851318 | orchestrator |
2026-04-07 02:35:34.851347 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851357 | orchestrator | Tuesday 07 April 2026 02:35:27 +0000 (0:00:00.237) 0:00:35.175 *********
2026-04-07 02:35:34.851367 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851377 | orchestrator |
2026-04-07 02:35:34.851388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851414 | orchestrator | Tuesday 07 April 2026 02:35:27 +0000 (0:00:00.228) 0:00:35.404 *********
2026-04-07 02:35:34.851425 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851434 | orchestrator |
2026-04-07 02:35:34.851445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851456 | orchestrator | Tuesday 07 April 2026 02:35:28 +0000 (0:00:00.228) 0:00:35.633 *********
2026-04-07 02:35:34.851466 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851476 | orchestrator |
2026-04-07 02:35:34.851486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851496 | orchestrator | Tuesday 07 April 2026 02:35:28 +0000 (0:00:00.240) 0:00:35.873 *********
2026-04-07 02:35:34.851513 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851523 | orchestrator |
2026-04-07 02:35:34.851533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851544 | orchestrator | Tuesday 07 April 2026 02:35:28 +0000 (0:00:00.238) 0:00:36.112 *********
2026-04-07 02:35:34.851554 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851563 | orchestrator |
2026-04-07 02:35:34.851573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851583 | orchestrator | Tuesday 07 April 2026 02:35:29 +0000 (0:00:00.715) 0:00:36.827 *********
2026-04-07 02:35:34.851593 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-07 02:35:34.851612 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-07 02:35:34.851622 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-07 02:35:34.851632 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-07 02:35:34.851643 | orchestrator |
2026-04-07 02:35:34.851653 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851663 | orchestrator | Tuesday 07 April 2026 02:35:30 +0000 (0:00:00.762) 0:00:37.589 *********
2026-04-07 02:35:34.851673 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851683 | orchestrator |
2026-04-07 02:35:34.851693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851704 | orchestrator | Tuesday 07 April 2026 02:35:30 +0000 (0:00:00.253) 0:00:37.843 *********
2026-04-07 02:35:34.851714 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851724 | orchestrator |
2026-04-07 02:35:34.851735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851745 | orchestrator | Tuesday 07 April 2026 02:35:30 +0000 (0:00:00.222) 0:00:38.065 *********
2026-04-07 02:35:34.851755 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851765 | orchestrator |
2026-04-07 02:35:34.851776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-07 02:35:34.851786 | orchestrator | Tuesday 07 April 2026 02:35:30 +0000 (0:00:00.211) 0:00:38.276 *********
2026-04-07 02:35:34.851796 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851806 | orchestrator |
2026-04-07 02:35:34.851816 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-07 02:35:34.851826 | orchestrator | Tuesday 07 April 2026 02:35:30 +0000 (0:00:00.217) 0:00:38.494 *********
2026-04-07 02:35:34.851837 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.851847 | orchestrator |
2026-04-07 02:35:34.851857 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-07 02:35:34.851868 | orchestrator | Tuesday 07 April 2026 02:35:31 +0000 (0:00:00.182) 0:00:38.677 *********
2026-04-07 02:35:34.851878 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ccafa0da-42f8-5022-b95e-1902d46c646f'}})
2026-04-07 02:35:34.851888 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8941099b-00de-50f1-81f7-f26159704c09'}})
2026-04-07 02:35:34.851897 | orchestrator |
2026-04-07 02:35:34.851907 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-07 02:35:34.851917 | orchestrator | Tuesday 07 April 2026 02:35:31 +0000 (0:00:00.274) 0:00:38.952 *********
2026-04-07 02:35:34.851930 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:34.851941 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:34.851951 | orchestrator |
2026-04-07 02:35:34.851961 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-07 02:35:34.851971 | orchestrator | Tuesday 07 April 2026 02:35:33 +0000 (0:00:01.903) 0:00:40.855 *********
2026-04-07 02:35:34.851981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:34.851992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:34.852003 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:34.852013 | orchestrator |
2026-04-07 02:35:34.852023 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-07 02:35:34.852033 | orchestrator | Tuesday 07 April 2026 02:35:33 +0000 (0:00:00.165) 0:00:41.020 *********
2026-04-07 02:35:34.852043 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:34.852069 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.238768 | orchestrator |
2026-04-07 02:35:41.238860 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-07 02:35:41.238871 | orchestrator | Tuesday 07 April 2026 02:35:34 +0000 (0:00:01.348) 0:00:42.369 *********
2026-04-07 02:35:41.238878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.238887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.238893 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.238900 | orchestrator |
2026-04-07 02:35:41.238919 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-07 02:35:41.238925 | orchestrator | Tuesday 07 April 2026 02:35:35 +0000 (0:00:00.423) 0:00:42.793 *********
2026-04-07 02:35:41.238931 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.238937 | orchestrator |
2026-04-07 02:35:41.238943 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-07 02:35:41.238949 | orchestrator | Tuesday 07 April 2026 02:35:35 +0000 (0:00:00.159) 0:00:42.953 *********
2026-04-07 02:35:41.238955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.238961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.238967 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.238973 | orchestrator |
2026-04-07 02:35:41.238979 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-07 02:35:41.238984 | orchestrator | Tuesday 07 April 2026 02:35:35 +0000 (0:00:00.160) 0:00:43.114 *********
2026-04-07 02:35:41.238990 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.238996 | orchestrator |
2026-04-07 02:35:41.239002 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-07 02:35:41.239008 | orchestrator | Tuesday 07 April 2026 02:35:35 +0000 (0:00:00.143) 0:00:43.257 *********
2026-04-07 02:35:41.239013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.239019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.239025 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239032 | orchestrator |
2026-04-07 02:35:41.239038 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-07 02:35:41.239044 | orchestrator | Tuesday 07 April 2026 02:35:35 +0000 (0:00:00.191) 0:00:43.449 *********
2026-04-07 02:35:41.239050 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239068 | orchestrator |
2026-04-07 02:35:41.239074 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-07 02:35:41.239080 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.156) 0:00:43.605 *********
2026-04-07 02:35:41.239086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.239092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.239098 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239104 | orchestrator |
2026-04-07 02:35:41.239109 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-07 02:35:41.239133 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.170) 0:00:43.776 *********
2026-04-07 02:35:41.239139 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:41.239146 | orchestrator |
2026-04-07 02:35:41.239152 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-07 02:35:41.239158 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.159) 0:00:43.936 *********
2026-04-07 02:35:41.239164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.239169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.239175 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239181 | orchestrator |
2026-04-07 02:35:41.239187 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-07 02:35:41.239193 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.187) 0:00:44.124 *********
2026-04-07 02:35:41.239198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.239204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.239210 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239216 | orchestrator |
2026-04-07 02:35:41.239221 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-07 02:35:41.239240 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.185) 0:00:44.309 *********
2026-04-07 02:35:41.239246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:41.239252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:41.239258 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239263 | orchestrator |
2026-04-07 02:35:41.239269 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-07 02:35:41.239275 | orchestrator | Tuesday 07 April 2026 02:35:36 +0000 (0:00:00.194) 0:00:44.504 *********
2026-04-07 02:35:41.239284 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239290 | orchestrator |
2026-04-07 02:35:41.239295 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-07 02:35:41.239302 | orchestrator | Tuesday 07 April 2026 02:35:37 +0000 (0:00:00.408) 0:00:44.913 *********
2026-04-07 02:35:41.239309 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239315 | orchestrator |
2026-04-07 02:35:41.239322 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-07 02:35:41.239329 | orchestrator | Tuesday 07 April 2026 02:35:37 +0000 (0:00:00.158) 0:00:45.071 *********
2026-04-07 02:35:41.239336 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239342 | orchestrator |
2026-04-07 02:35:41.239348 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-07 02:35:41.239355 | orchestrator | Tuesday 07 April 2026 02:35:37 +0000 (0:00:00.165) 0:00:45.236 *********
2026-04-07 02:35:41.239361 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 02:35:41.239368 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-07 02:35:41.239375 | orchestrator | }
2026-04-07 02:35:41.239381 | orchestrator |
2026-04-07 02:35:41.239388 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-07 02:35:41.239417 | orchestrator | Tuesday 07 April 2026 02:35:37 +0000 (0:00:00.149) 0:00:45.385 *********
2026-04-07 02:35:41.239424 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 02:35:41.239431 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-07 02:35:41.239443 | orchestrator | }
2026-04-07 02:35:41.239450 | orchestrator |
2026-04-07 02:35:41.239456 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-07 02:35:41.239463 | orchestrator | Tuesday 07 April 2026 02:35:38 +0000 (0:00:00.160) 0:00:45.546 *********
2026-04-07 02:35:41.239470 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 02:35:41.239477 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-07 02:35:41.239484 | orchestrator | }
2026-04-07 02:35:41.239490 | orchestrator |
2026-04-07 02:35:41.239497 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-07 02:35:41.239504 | orchestrator | Tuesday 07 April 2026 02:35:38 +0000 (0:00:00.193) 0:00:45.740 *********
2026-04-07 02:35:41.239510 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:41.239517 | orchestrator |
2026-04-07 02:35:41.239524 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-07 02:35:41.239531 | orchestrator | Tuesday 07 April 2026 02:35:38 +0000 (0:00:00.543) 0:00:46.283 *********
2026-04-07 02:35:41.239537 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:41.239544 | orchestrator |
2026-04-07 02:35:41.239551 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-07 02:35:41.239558 | orchestrator | Tuesday 07 April 2026 02:35:39 +0000 (0:00:00.498) 0:00:46.782 *********
2026-04-07 02:35:41.239565 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:41.239571 | orchestrator |
2026-04-07 02:35:41.239579 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-07 02:35:41.239585 | orchestrator | Tuesday 07 April 2026 02:35:39 +0000 (0:00:00.540) 0:00:47.323 *********
2026-04-07 02:35:41.239592 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:41.239599 | orchestrator |
2026-04-07 02:35:41.239605 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-07 02:35:41.239613 | orchestrator | Tuesday 07 April 2026 02:35:39 +0000 (0:00:00.161) 0:00:47.485 *********
2026-04-07 02:35:41.239620 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239626 | orchestrator |
2026-04-07 02:35:41.239633 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-07 02:35:41.239640 | orchestrator | Tuesday 07 April 2026 02:35:40 +0000 (0:00:00.138) 0:00:47.623 *********
2026-04-07 02:35:41.239648 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239655 | orchestrator |
2026-04-07 02:35:41.239662 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-07 02:35:41.239669 | orchestrator | Tuesday 07 April 2026 02:35:40 +0000 (0:00:00.358) 0:00:47.981 *********
2026-04-07 02:35:41.239675 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 02:35:41.239681 | orchestrator |  "vgs_report": {
2026-04-07 02:35:41.239687 | orchestrator |  "vg": []
2026-04-07 02:35:41.239693 | orchestrator |  }
2026-04-07 02:35:41.239699 | orchestrator | }
2026-04-07 02:35:41.239705 | orchestrator |
2026-04-07 02:35:41.239711 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-07 02:35:41.239717 | orchestrator | Tuesday 07 April 2026 02:35:40 +0000 (0:00:00.160) 0:00:48.142 *********
2026-04-07 02:35:41.239722 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239728 | orchestrator |
2026-04-07 02:35:41.239734 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-07 02:35:41.239739 | orchestrator | Tuesday 07 April 2026 02:35:40 +0000 (0:00:00.145) 0:00:48.287 *********
2026-04-07 02:35:41.239745 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239751 | orchestrator |
2026-04-07 02:35:41.239756 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-07 02:35:41.239762 | orchestrator | Tuesday 07 April 2026 02:35:40 +0000 (0:00:00.149) 0:00:48.437 *********
2026-04-07 02:35:41.239768 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239790 | orchestrator |
2026-04-07 02:35:41.239796 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-07 02:35:41.239802 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.177) 0:00:48.615 *********
2026-04-07 02:35:41.239812 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:41.239818 | orchestrator |
2026-04-07 02:35:41.239828 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-07 02:35:46.525752 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.145) 0:00:48.761 *********
2026-04-07 02:35:46.525847 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.525860 | orchestrator |
2026-04-07 02:35:46.525871 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-07 02:35:46.525880 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.149) 0:00:48.911 *********
2026-04-07 02:35:46.525889 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.525898 | orchestrator |
2026-04-07 02:35:46.525907 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-07 02:35:46.525916 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.180) 0:00:49.091 *********
2026-04-07 02:35:46.525924 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.525933 | orchestrator |
2026-04-07 02:35:46.525956 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-07 02:35:46.525965 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.163) 0:00:49.255 *********
2026-04-07 02:35:46.525974 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.525983 | orchestrator |
2026-04-07 02:35:46.525991 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-07 02:35:46.526000 | orchestrator | Tuesday 07 April 2026 02:35:41 +0000 (0:00:00.168) 0:00:49.423 *********
2026-04-07 02:35:46.526008 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526070 | orchestrator |
2026-04-07 02:35:46.526081 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-07 02:35:46.526090 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.142) 0:00:49.566 *********
2026-04-07 02:35:46.526098 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526107 | orchestrator |
2026-04-07 02:35:46.526116 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-07 02:35:46.526125 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.359) 0:00:49.925 *********
2026-04-07 02:35:46.526134 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526142 | orchestrator |
2026-04-07 02:35:46.526151 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-07 02:35:46.526159 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.152) 0:00:50.078 *********
2026-04-07 02:35:46.526168 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526177 | orchestrator |
2026-04-07 02:35:46.526185 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-07 02:35:46.526194 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.139) 0:00:50.217 *********
2026-04-07 02:35:46.526202 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526211 | orchestrator |
2026-04-07 02:35:46.526220 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-07 02:35:46.526229 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.139) 0:00:50.356 *********
2026-04-07 02:35:46.526237 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526246 | orchestrator |
2026-04-07 02:35:46.526255 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-07 02:35:46.526263 | orchestrator | Tuesday 07 April 2026 02:35:42 +0000 (0:00:00.159) 0:00:50.516 *********
2026-04-07 02:35:46.526273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526292 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526303 | orchestrator |
2026-04-07 02:35:46.526313 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-07 02:35:46.526343 | orchestrator | Tuesday 07 April 2026 02:35:43 +0000 (0:00:00.179) 0:00:50.696 *********
2026-04-07 02:35:46.526353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526373 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526384 | orchestrator |
2026-04-07 02:35:46.526394 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-07 02:35:46.526426 | orchestrator | Tuesday 07 April 2026 02:35:43 +0000 (0:00:00.208) 0:00:50.904 *********
2026-04-07 02:35:46.526436 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526456 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526467 | orchestrator |
2026-04-07 02:35:46.526477 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-07 02:35:46.526488 | orchestrator | Tuesday 07 April 2026 02:35:43 +0000 (0:00:00.192) 0:00:51.096 *********
2026-04-07 02:35:46.526498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526518 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526529 | orchestrator |
2026-04-07 02:35:46.526556 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-07 02:35:46.526567 | orchestrator | Tuesday 07 April 2026 02:35:43 +0000 (0:00:00.176) 0:00:51.273 *********
2026-04-07 02:35:46.526578 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526598 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526608 | orchestrator |
2026-04-07 02:35:46.526623 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-07 02:35:46.526634 | orchestrator | Tuesday 07 April 2026 02:35:43 +0000 (0:00:00.189) 0:00:51.463 *********
2026-04-07 02:35:46.526644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526665 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526674 | orchestrator |
2026-04-07 02:35:46.526683 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-07 02:35:46.526692 | orchestrator | Tuesday 07 April 2026 02:35:44 +0000 (0:00:00.176) 0:00:51.639 *********
2026-04-07 02:35:46.526701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526718 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526733 | orchestrator |
2026-04-07 02:35:46.526742 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-07 02:35:46.526753 | orchestrator | Tuesday 07 April 2026 02:35:44 +0000 (0:00:00.451) 0:00:52.091 *********
2026-04-07 02:35:46.526768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.526785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.526806 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.526826 | orchestrator |
2026-04-07 02:35:46.526842 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-07 02:35:46.526855 | orchestrator | Tuesday 07 April 2026 02:35:44 +0000 (0:00:00.210) 0:00:52.302 *********
2026-04-07 02:35:46.526870 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:46.526883 | orchestrator |
2026-04-07 02:35:46.526898 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-07 02:35:46.526912 | orchestrator | Tuesday 07 April 2026 02:35:45 +0000 (0:00:00.516) 0:00:52.819 *********
2026-04-07 02:35:46.526928 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:46.526942 | orchestrator |
2026-04-07 02:35:46.526956 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-07 02:35:46.526971 | orchestrator | Tuesday 07 April 2026 02:35:45 +0000 (0:00:00.531) 0:00:53.351 *********
2026-04-07 02:35:46.526985 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:35:46.526994 | orchestrator |
2026-04-07 02:35:46.527002 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-07 02:35:46.527011 | orchestrator | Tuesday 07 April 2026 02:35:45 +0000 (0:00:00.169) 0:00:53.520 *********
2026-04-07 02:35:46.527020 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'vg_name': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.527029 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'vg_name': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.527038 | orchestrator |
2026-04-07 02:35:46.527047 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-07 02:35:46.527055 | orchestrator | Tuesday 07 April 2026 02:35:46 +0000 (0:00:00.191) 0:00:53.712 *********
2026-04-07 02:35:46.527064 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.527072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:46.527081 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:46.527090 | orchestrator |
2026-04-07 02:35:46.527098 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-07 02:35:46.527107 | orchestrator | Tuesday 07 April 2026 02:35:46 +0000 (0:00:00.170) 0:00:53.882 *********
2026-04-07 02:35:46.527116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 02:35:46.527133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 02:35:53.776925 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:35:53.777004 | orchestrator |
2026-04-07 02:35:53.777012 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-07 02:35:53.777019 | 
orchestrator | Tuesday 07 April 2026 02:35:46 +0000 (0:00:00.165) 0:00:54.048 ********* 2026-04-07 02:35:53.777024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 02:35:53.777057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 02:35:53.777063 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:35:53.777067 | orchestrator | 2026-04-07 02:35:53.777072 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-07 02:35:53.777077 | orchestrator | Tuesday 07 April 2026 02:35:46 +0000 (0:00:00.180) 0:00:54.229 ********* 2026-04-07 02:35:53.777082 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 02:35:53.777087 | orchestrator |  "lvm_report": { 2026-04-07 02:35:53.777093 | orchestrator |  "lv": [ 2026-04-07 02:35:53.777098 | orchestrator |  { 2026-04-07 02:35:53.777103 | orchestrator |  "lv_name": "osd-block-8941099b-00de-50f1-81f7-f26159704c09", 2026-04-07 02:35:53.777108 | orchestrator |  "vg_name": "ceph-8941099b-00de-50f1-81f7-f26159704c09" 2026-04-07 02:35:53.777113 | orchestrator |  }, 2026-04-07 02:35:53.777118 | orchestrator |  { 2026-04-07 02:35:53.777122 | orchestrator |  "lv_name": "osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f", 2026-04-07 02:35:53.777127 | orchestrator |  "vg_name": "ceph-ccafa0da-42f8-5022-b95e-1902d46c646f" 2026-04-07 02:35:53.777132 | orchestrator |  } 2026-04-07 02:35:53.777136 | orchestrator |  ], 2026-04-07 02:35:53.777141 | orchestrator |  "pv": [ 2026-04-07 02:35:53.777145 | orchestrator |  { 2026-04-07 02:35:53.777150 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-07 02:35:53.777155 | orchestrator |  "vg_name": "ceph-ccafa0da-42f8-5022-b95e-1902d46c646f" 2026-04-07 02:35:53.777160 | orchestrator |  }, 2026-04-07 
02:35:53.777165 | orchestrator |  { 2026-04-07 02:35:53.777169 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-07 02:35:53.777174 | orchestrator |  "vg_name": "ceph-8941099b-00de-50f1-81f7-f26159704c09" 2026-04-07 02:35:53.777178 | orchestrator |  } 2026-04-07 02:35:53.777183 | orchestrator |  ] 2026-04-07 02:35:53.777188 | orchestrator |  } 2026-04-07 02:35:53.777193 | orchestrator | } 2026-04-07 02:35:53.777198 | orchestrator | 2026-04-07 02:35:53.777202 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-07 02:35:53.777207 | orchestrator | 2026-04-07 02:35:53.777212 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-07 02:35:53.777216 | orchestrator | Tuesday 07 April 2026 02:35:47 +0000 (0:00:00.344) 0:00:54.573 ********* 2026-04-07 02:35:53.777221 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-07 02:35:53.777226 | orchestrator | 2026-04-07 02:35:53.777230 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-07 02:35:53.777235 | orchestrator | Tuesday 07 April 2026 02:35:47 +0000 (0:00:00.802) 0:00:55.375 ********* 2026-04-07 02:35:53.777240 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:35:53.777244 | orchestrator | 2026-04-07 02:35:53.777249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777253 | orchestrator | Tuesday 07 April 2026 02:35:48 +0000 (0:00:00.268) 0:00:55.643 ********* 2026-04-07 02:35:53.777258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-07 02:35:53.777263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-07 02:35:53.777267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-07 02:35:53.777272 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-07 02:35:53.777276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-07 02:35:53.777281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-07 02:35:53.777285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-07 02:35:53.777295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-07 02:35:53.777299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-07 02:35:53.777304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-07 02:35:53.777309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-07 02:35:53.777313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-07 02:35:53.777318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-07 02:35:53.777322 | orchestrator | 2026-04-07 02:35:53.777327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777331 | orchestrator | Tuesday 07 April 2026 02:35:48 +0000 (0:00:00.467) 0:00:56.110 ********* 2026-04-07 02:35:53.777336 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777341 | orchestrator | 2026-04-07 02:35:53.777345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777350 | orchestrator | Tuesday 07 April 2026 02:35:48 +0000 (0:00:00.233) 0:00:56.344 ********* 2026-04-07 02:35:53.777354 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777359 | orchestrator | 2026-04-07 
02:35:53.777364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777378 | orchestrator | Tuesday 07 April 2026 02:35:49 +0000 (0:00:00.220) 0:00:56.564 ********* 2026-04-07 02:35:53.777383 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777388 | orchestrator | 2026-04-07 02:35:53.777392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777397 | orchestrator | Tuesday 07 April 2026 02:35:49 +0000 (0:00:00.229) 0:00:56.794 ********* 2026-04-07 02:35:53.777436 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777442 | orchestrator | 2026-04-07 02:35:53.777449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777458 | orchestrator | Tuesday 07 April 2026 02:35:49 +0000 (0:00:00.235) 0:00:57.029 ********* 2026-04-07 02:35:53.777465 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777472 | orchestrator | 2026-04-07 02:35:53.777480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777488 | orchestrator | Tuesday 07 April 2026 02:35:49 +0000 (0:00:00.253) 0:00:57.282 ********* 2026-04-07 02:35:53.777497 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777505 | orchestrator | 2026-04-07 02:35:53.777514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777522 | orchestrator | Tuesday 07 April 2026 02:35:49 +0000 (0:00:00.227) 0:00:57.510 ********* 2026-04-07 02:35:53.777530 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777539 | orchestrator | 2026-04-07 02:35:53.777548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777555 | orchestrator | Tuesday 07 April 2026 02:35:50 +0000 (0:00:00.225) 
0:00:57.736 ********* 2026-04-07 02:35:53.777561 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:35:53.777566 | orchestrator | 2026-04-07 02:35:53.777572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777577 | orchestrator | Tuesday 07 April 2026 02:35:50 +0000 (0:00:00.748) 0:00:58.484 ********* 2026-04-07 02:35:53.777582 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2) 2026-04-07 02:35:53.777589 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2) 2026-04-07 02:35:53.777594 | orchestrator | 2026-04-07 02:35:53.777599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777605 | orchestrator | Tuesday 07 April 2026 02:35:51 +0000 (0:00:00.487) 0:00:58.971 ********* 2026-04-07 02:35:53.777634 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7) 2026-04-07 02:35:53.777646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7) 2026-04-07 02:35:53.777651 | orchestrator | 2026-04-07 02:35:53.777657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777662 | orchestrator | Tuesday 07 April 2026 02:35:51 +0000 (0:00:00.503) 0:00:59.475 ********* 2026-04-07 02:35:53.777668 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d) 2026-04-07 02:35:53.777673 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d) 2026-04-07 02:35:53.777679 | orchestrator | 2026-04-07 02:35:53.777684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777689 | orchestrator | Tuesday 07 
April 2026 02:35:52 +0000 (0:00:00.482) 0:00:59.958 ********* 2026-04-07 02:35:53.777694 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599) 2026-04-07 02:35:53.777700 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599) 2026-04-07 02:35:53.777705 | orchestrator | 2026-04-07 02:35:53.777711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-07 02:35:53.777716 | orchestrator | Tuesday 07 April 2026 02:35:52 +0000 (0:00:00.479) 0:01:00.438 ********* 2026-04-07 02:35:53.777721 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-07 02:35:53.777726 | orchestrator | 2026-04-07 02:35:53.777731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:35:53.777737 | orchestrator | Tuesday 07 April 2026 02:35:53 +0000 (0:00:00.383) 0:01:00.822 ********* 2026-04-07 02:35:53.777742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-07 02:35:53.777747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-07 02:35:53.777752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-07 02:35:53.777758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-07 02:35:53.777763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-07 02:35:53.777768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-07 02:35:53.777774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-07 02:35:53.777778 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-07 02:35:53.777784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-07 02:35:53.777789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-07 02:35:53.777794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-07 02:35:53.777805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-07 02:36:03.412732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-07 02:36:03.412852 | orchestrator | 2026-04-07 02:36:03.412877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.412895 | orchestrator | Tuesday 07 April 2026 02:35:53 +0000 (0:00:00.471) 0:01:01.294 ********* 2026-04-07 02:36:03.412910 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.412927 | orchestrator | 2026-04-07 02:36:03.412942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.412978 | orchestrator | Tuesday 07 April 2026 02:35:54 +0000 (0:00:00.242) 0:01:01.536 ********* 2026-04-07 02:36:03.412995 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413035 | orchestrator | 2026-04-07 02:36:03.413045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413053 | orchestrator | Tuesday 07 April 2026 02:35:54 +0000 (0:00:00.232) 0:01:01.768 ********* 2026-04-07 02:36:03.413062 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413071 | orchestrator | 2026-04-07 02:36:03.413079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413088 | 
orchestrator | Tuesday 07 April 2026 02:35:54 +0000 (0:00:00.226) 0:01:01.995 ********* 2026-04-07 02:36:03.413097 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413105 | orchestrator | 2026-04-07 02:36:03.413114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413123 | orchestrator | Tuesday 07 April 2026 02:35:54 +0000 (0:00:00.241) 0:01:02.236 ********* 2026-04-07 02:36:03.413131 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413140 | orchestrator | 2026-04-07 02:36:03.413148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413157 | orchestrator | Tuesday 07 April 2026 02:35:55 +0000 (0:00:00.775) 0:01:03.011 ********* 2026-04-07 02:36:03.413165 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413174 | orchestrator | 2026-04-07 02:36:03.413182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413191 | orchestrator | Tuesday 07 April 2026 02:35:55 +0000 (0:00:00.294) 0:01:03.306 ********* 2026-04-07 02:36:03.413203 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413217 | orchestrator | 2026-04-07 02:36:03.413231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413245 | orchestrator | Tuesday 07 April 2026 02:35:56 +0000 (0:00:00.244) 0:01:03.551 ********* 2026-04-07 02:36:03.413260 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413276 | orchestrator | 2026-04-07 02:36:03.413292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413308 | orchestrator | Tuesday 07 April 2026 02:35:56 +0000 (0:00:00.230) 0:01:03.781 ********* 2026-04-07 02:36:03.413323 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-07 02:36:03.413337 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-07 02:36:03.413349 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-07 02:36:03.413359 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-07 02:36:03.413370 | orchestrator | 2026-04-07 02:36:03.413380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413390 | orchestrator | Tuesday 07 April 2026 02:35:57 +0000 (0:00:00.790) 0:01:04.571 ********* 2026-04-07 02:36:03.413430 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413442 | orchestrator | 2026-04-07 02:36:03.413452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413462 | orchestrator | Tuesday 07 April 2026 02:35:57 +0000 (0:00:00.237) 0:01:04.809 ********* 2026-04-07 02:36:03.413473 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413483 | orchestrator | 2026-04-07 02:36:03.413494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413504 | orchestrator | Tuesday 07 April 2026 02:35:57 +0000 (0:00:00.208) 0:01:05.018 ********* 2026-04-07 02:36:03.413514 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413525 | orchestrator | 2026-04-07 02:36:03.413535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-07 02:36:03.413546 | orchestrator | Tuesday 07 April 2026 02:35:57 +0000 (0:00:00.224) 0:01:05.242 ********* 2026-04-07 02:36:03.413556 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.413566 | orchestrator | 2026-04-07 02:36:03.413576 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-07 02:36:03.413586 | orchestrator | Tuesday 07 April 2026 02:35:57 +0000 (0:00:00.209) 0:01:05.451 ********* 2026-04-07 02:36:03.413596 | orchestrator | skipping: [testbed-node-5] 2026-04-07 
02:36:03.413607 | orchestrator | 2026-04-07 02:36:03.413625 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-07 02:36:03.413635 | orchestrator | Tuesday 07 April 2026 02:35:58 +0000 (0:00:00.151) 0:01:05.603 ********* 2026-04-07 02:36:03.413648 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '754aebfc-d76c-537f-941d-8ad36483cdb2'}}) 2026-04-07 02:36:03.413658 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed7b856a-23c6-522d-bad3-e57b6a18196d'}}) 2026-04-07 02:36:03.413669 | orchestrator | 2026-04-07 02:36:03.413680 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-07 02:36:03.413690 | orchestrator | Tuesday 07 April 2026 02:35:58 +0000 (0:00:00.207) 0:01:05.811 ********* 2026-04-07 02:36:03.413702 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 02:36:03.413712 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 02:36:03.413722 | orchestrator | 2026-04-07 02:36:03.413736 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-07 02:36:03.413776 | orchestrator | Tuesday 07 April 2026 02:36:00 +0000 (0:00:01.854) 0:01:07.665 ********* 2026-04-07 02:36:03.413796 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:03.413811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:03.413825 | orchestrator | skipping: 
[testbed-node-5] 2026-04-07 02:36:03.413839 | orchestrator | 2026-04-07 02:36:03.413860 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-07 02:36:03.413875 | orchestrator | Tuesday 07 April 2026 02:36:00 +0000 (0:00:00.418) 0:01:08.083 ********* 2026-04-07 02:36:03.413890 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 02:36:03.413904 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 02:36:03.413919 | orchestrator | 2026-04-07 02:36:03.413934 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-07 02:36:03.413958 | orchestrator | Tuesday 07 April 2026 02:36:01 +0000 (0:00:01.360) 0:01:09.444 ********* 2026-04-07 02:36:03.413973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:03.413987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:03.414001 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414083 | orchestrator | 2026-04-07 02:36:03.414103 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-07 02:36:03.414113 | orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.173) 0:01:09.618 ********* 2026-04-07 02:36:03.414121 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414130 | orchestrator | 2026-04-07 02:36:03.414138 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-07 02:36:03.414147 | 
orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.160) 0:01:09.778 ********* 2026-04-07 02:36:03.414155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:03.414164 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:03.414183 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414192 | orchestrator | 2026-04-07 02:36:03.414201 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-07 02:36:03.414209 | orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.175) 0:01:09.953 ********* 2026-04-07 02:36:03.414218 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414226 | orchestrator | 2026-04-07 02:36:03.414235 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-07 02:36:03.414244 | orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.160) 0:01:10.114 ********* 2026-04-07 02:36:03.414252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:03.414261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:03.414270 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414278 | orchestrator | 2026-04-07 02:36:03.414287 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-07 02:36:03.414295 | orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.167) 0:01:10.282 ********* 2026-04-07 02:36:03.414304 | orchestrator | 
skipping: [testbed-node-5] 2026-04-07 02:36:03.414312 | orchestrator | 2026-04-07 02:36:03.414321 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-07 02:36:03.414329 | orchestrator | Tuesday 07 April 2026 02:36:02 +0000 (0:00:00.158) 0:01:10.440 ********* 2026-04-07 02:36:03.414338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:03.414347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:03.414356 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:03.414364 | orchestrator | 2026-04-07 02:36:03.414373 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-07 02:36:03.414382 | orchestrator | Tuesday 07 April 2026 02:36:03 +0000 (0:00:00.194) 0:01:10.635 ********* 2026-04-07 02:36:03.414390 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:03.414399 | orchestrator | 2026-04-07 02:36:03.414431 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-07 02:36:03.414440 | orchestrator | Tuesday 07 April 2026 02:36:03 +0000 (0:00:00.144) 0:01:10.779 ********* 2026-04-07 02:36:03.414461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:10.676230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:10.676329 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676344 | orchestrator | 2026-04-07 02:36:10.676355 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-07 02:36:10.676367 | orchestrator | Tuesday 07 April 2026 02:36:03 +0000 (0:00:00.154) 0:01:10.934 ********* 2026-04-07 02:36:10.676393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:10.676442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:10.676450 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676456 | orchestrator | 2026-04-07 02:36:10.676462 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-07 02:36:10.676468 | orchestrator | Tuesday 07 April 2026 02:36:03 +0000 (0:00:00.180) 0:01:11.114 ********* 2026-04-07 02:36:10.676491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:10.676497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:10.676503 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676509 | orchestrator | 2026-04-07 02:36:10.676515 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-07 02:36:10.676520 | orchestrator | Tuesday 07 April 2026 02:36:03 +0000 (0:00:00.412) 0:01:11.527 ********* 2026-04-07 02:36:10.676526 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676531 | orchestrator | 2026-04-07 02:36:10.676537 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-07 02:36:10.676543 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 
(0:00:00.146) 0:01:11.673 ********* 2026-04-07 02:36:10.676549 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676555 | orchestrator | 2026-04-07 02:36:10.676561 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-07 02:36:10.676567 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 (0:00:00.148) 0:01:11.822 ********* 2026-04-07 02:36:10.676572 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676578 | orchestrator | 2026-04-07 02:36:10.676584 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-07 02:36:10.676590 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 (0:00:00.160) 0:01:11.983 ********* 2026-04-07 02:36:10.676595 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:36:10.676602 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-07 02:36:10.676608 | orchestrator | } 2026-04-07 02:36:10.676614 | orchestrator | 2026-04-07 02:36:10.676620 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-07 02:36:10.676626 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 (0:00:00.174) 0:01:12.157 ********* 2026-04-07 02:36:10.676631 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:36:10.676637 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-07 02:36:10.676643 | orchestrator | } 2026-04-07 02:36:10.676649 | orchestrator | 2026-04-07 02:36:10.676654 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-07 02:36:10.676660 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 (0:00:00.172) 0:01:12.330 ********* 2026-04-07 02:36:10.676666 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:36:10.676672 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-07 02:36:10.676677 | orchestrator | } 2026-04-07 02:36:10.676683 | orchestrator | 2026-04-07 02:36:10.676689 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-07 02:36:10.676695 | orchestrator | Tuesday 07 April 2026 02:36:04 +0000 (0:00:00.160) 0:01:12.490 ********* 2026-04-07 02:36:10.676700 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:10.676706 | orchestrator | 2026-04-07 02:36:10.676712 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-07 02:36:10.676718 | orchestrator | Tuesday 07 April 2026 02:36:05 +0000 (0:00:00.609) 0:01:13.100 ********* 2026-04-07 02:36:10.676724 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:10.676729 | orchestrator | 2026-04-07 02:36:10.676735 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-07 02:36:10.676741 | orchestrator | Tuesday 07 April 2026 02:36:06 +0000 (0:00:00.616) 0:01:13.717 ********* 2026-04-07 02:36:10.676746 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:10.676752 | orchestrator | 2026-04-07 02:36:10.676758 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-07 02:36:10.676763 | orchestrator | Tuesday 07 April 2026 02:36:06 +0000 (0:00:00.536) 0:01:14.253 ********* 2026-04-07 02:36:10.676769 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:10.676776 | orchestrator | 2026-04-07 02:36:10.676783 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-07 02:36:10.676795 | orchestrator | Tuesday 07 April 2026 02:36:06 +0000 (0:00:00.154) 0:01:14.408 ********* 2026-04-07 02:36:10.676802 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676808 | orchestrator | 2026-04-07 02:36:10.676815 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-07 02:36:10.676822 | orchestrator | Tuesday 07 April 2026 02:36:07 +0000 (0:00:00.141) 0:01:14.549 ********* 2026-04-07 02:36:10.676828 | 
orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676835 | orchestrator | 2026-04-07 02:36:10.676841 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-07 02:36:10.676848 | orchestrator | Tuesday 07 April 2026 02:36:07 +0000 (0:00:00.417) 0:01:14.967 ********* 2026-04-07 02:36:10.676855 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:36:10.676862 | orchestrator |  "vgs_report": { 2026-04-07 02:36:10.676869 | orchestrator |  "vg": [] 2026-04-07 02:36:10.676891 | orchestrator |  } 2026-04-07 02:36:10.676898 | orchestrator | } 2026-04-07 02:36:10.676905 | orchestrator | 2026-04-07 02:36:10.676912 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-07 02:36:10.676919 | orchestrator | Tuesday 07 April 2026 02:36:07 +0000 (0:00:00.181) 0:01:15.148 ********* 2026-04-07 02:36:10.676925 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676932 | orchestrator | 2026-04-07 02:36:10.676939 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-07 02:36:10.676946 | orchestrator | Tuesday 07 April 2026 02:36:07 +0000 (0:00:00.223) 0:01:15.372 ********* 2026-04-07 02:36:10.676958 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676965 | orchestrator | 2026-04-07 02:36:10.676972 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-07 02:36:10.676978 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.214) 0:01:15.587 ********* 2026-04-07 02:36:10.676985 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.676992 | orchestrator | 2026-04-07 02:36:10.676998 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-07 02:36:10.677005 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.158) 0:01:15.745 ********* 2026-04-07 02:36:10.677011 | 
orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677018 | orchestrator | 2026-04-07 02:36:10.677024 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-07 02:36:10.677031 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.157) 0:01:15.902 ********* 2026-04-07 02:36:10.677038 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677044 | orchestrator | 2026-04-07 02:36:10.677051 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-07 02:36:10.677058 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.194) 0:01:16.097 ********* 2026-04-07 02:36:10.677064 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677071 | orchestrator | 2026-04-07 02:36:10.677078 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-07 02:36:10.677084 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.136) 0:01:16.233 ********* 2026-04-07 02:36:10.677091 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677098 | orchestrator | 2026-04-07 02:36:10.677105 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-07 02:36:10.677111 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.141) 0:01:16.374 ********* 2026-04-07 02:36:10.677118 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677125 | orchestrator | 2026-04-07 02:36:10.677132 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-07 02:36:10.677139 | orchestrator | Tuesday 07 April 2026 02:36:08 +0000 (0:00:00.145) 0:01:16.520 ********* 2026-04-07 02:36:10.677144 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677150 | orchestrator | 2026-04-07 02:36:10.677156 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-04-07 02:36:10.677161 | orchestrator | Tuesday 07 April 2026 02:36:09 +0000 (0:00:00.150) 0:01:16.670 ********* 2026-04-07 02:36:10.677171 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677177 | orchestrator | 2026-04-07 02:36:10.677183 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-07 02:36:10.677188 | orchestrator | Tuesday 07 April 2026 02:36:09 +0000 (0:00:00.148) 0:01:16.819 ********* 2026-04-07 02:36:10.677194 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677200 | orchestrator | 2026-04-07 02:36:10.677205 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-07 02:36:10.677211 | orchestrator | Tuesday 07 April 2026 02:36:09 +0000 (0:00:00.385) 0:01:17.205 ********* 2026-04-07 02:36:10.677217 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677222 | orchestrator | 2026-04-07 02:36:10.677228 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-07 02:36:10.677234 | orchestrator | Tuesday 07 April 2026 02:36:09 +0000 (0:00:00.165) 0:01:17.371 ********* 2026-04-07 02:36:10.677240 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677245 | orchestrator | 2026-04-07 02:36:10.677251 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-07 02:36:10.677256 | orchestrator | Tuesday 07 April 2026 02:36:09 +0000 (0:00:00.147) 0:01:17.518 ********* 2026-04-07 02:36:10.677262 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677268 | orchestrator | 2026-04-07 02:36:10.677273 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-07 02:36:10.677279 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.157) 0:01:17.675 ********* 2026-04-07 02:36:10.677285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:10.677291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:10.677297 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677302 | orchestrator | 2026-04-07 02:36:10.677308 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-07 02:36:10.677314 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.172) 0:01:17.847 ********* 2026-04-07 02:36:10.677319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:10.677325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:10.677331 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:10.677337 | orchestrator | 2026-04-07 02:36:10.677342 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-07 02:36:10.677348 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.173) 0:01:18.021 ********* 2026-04-07 02:36:10.677358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.910957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911107 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911137 | orchestrator | 2026-04-07 02:36:13.911181 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-04-07 02:36:13.911204 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.179) 0:01:18.201 ********* 2026-04-07 02:36:13.911223 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.911243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911300 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911324 | orchestrator | 2026-04-07 02:36:13.911343 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-07 02:36:13.911360 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.147) 0:01:18.348 ********* 2026-04-07 02:36:13.911379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.911397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911444 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911463 | orchestrator | 2026-04-07 02:36:13.911480 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-07 02:36:13.911498 | orchestrator | Tuesday 07 April 2026 02:36:10 +0000 (0:00:00.181) 0:01:18.530 ********* 2026-04-07 02:36:13.911516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.911535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911555 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911574 | orchestrator | 2026-04-07 02:36:13.911594 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-07 02:36:13.911614 | orchestrator | Tuesday 07 April 2026 02:36:11 +0000 (0:00:00.162) 0:01:18.693 ********* 2026-04-07 02:36:13.911633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.911652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911671 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911697 | orchestrator | 2026-04-07 02:36:13.911718 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-07 02:36:13.911736 | orchestrator | Tuesday 07 April 2026 02:36:11 +0000 (0:00:00.158) 0:01:18.851 ********* 2026-04-07 02:36:13.911754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.911771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.911787 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.911803 | orchestrator | 2026-04-07 02:36:13.911819 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-07 02:36:13.911838 | orchestrator | Tuesday 07 April 2026 02:36:11 +0000 (0:00:00.175) 0:01:19.027 ********* 2026-04-07 02:36:13.911857 | 
orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:13.911875 | orchestrator | 2026-04-07 02:36:13.911892 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-07 02:36:13.911909 | orchestrator | Tuesday 07 April 2026 02:36:12 +0000 (0:00:00.778) 0:01:19.805 ********* 2026-04-07 02:36:13.911925 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:13.911942 | orchestrator | 2026-04-07 02:36:13.911960 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-07 02:36:13.911979 | orchestrator | Tuesday 07 April 2026 02:36:12 +0000 (0:00:00.540) 0:01:20.346 ********* 2026-04-07 02:36:13.911998 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:13.912018 | orchestrator | 2026-04-07 02:36:13.912036 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-07 02:36:13.912054 | orchestrator | Tuesday 07 April 2026 02:36:12 +0000 (0:00:00.165) 0:01:20.511 ********* 2026-04-07 02:36:13.912093 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'vg_name': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 02:36:13.912114 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'vg_name': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 02:36:13.912132 | orchestrator | 2026-04-07 02:36:13.912150 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-07 02:36:13.912166 | orchestrator | Tuesday 07 April 2026 02:36:13 +0000 (0:00:00.198) 0:01:20.710 ********* 2026-04-07 02:36:13.912201 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.912224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.912235 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.912247 | orchestrator | 2026-04-07 02:36:13.912258 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-07 02:36:13.912268 | orchestrator | Tuesday 07 April 2026 02:36:13 +0000 (0:00:00.195) 0:01:20.906 ********* 2026-04-07 02:36:13.912279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.912290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.912301 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.912311 | orchestrator | 2026-04-07 02:36:13.912322 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-07 02:36:13.912333 | orchestrator | Tuesday 07 April 2026 02:36:13 +0000 (0:00:00.185) 0:01:21.092 ********* 2026-04-07 02:36:13.912344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 02:36:13.912355 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 02:36:13.912365 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:13.912376 | orchestrator | 2026-04-07 02:36:13.912387 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-07 02:36:13.912398 | orchestrator | Tuesday 07 April 2026 02:36:13 +0000 (0:00:00.148) 0:01:21.240 ********* 2026-04-07 02:36:13.912494 | 
orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:36:13.912514 | orchestrator |  "lvm_report": { 2026-04-07 02:36:13.912526 | orchestrator |  "lv": [ 2026-04-07 02:36:13.912538 | orchestrator |  { 2026-04-07 02:36:13.912549 | orchestrator |  "lv_name": "osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2", 2026-04-07 02:36:13.912561 | orchestrator |  "vg_name": "ceph-754aebfc-d76c-537f-941d-8ad36483cdb2" 2026-04-07 02:36:13.912572 | orchestrator |  }, 2026-04-07 02:36:13.912583 | orchestrator |  { 2026-04-07 02:36:13.912594 | orchestrator |  "lv_name": "osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d", 2026-04-07 02:36:13.912605 | orchestrator |  "vg_name": "ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d" 2026-04-07 02:36:13.912616 | orchestrator |  } 2026-04-07 02:36:13.912629 | orchestrator |  ], 2026-04-07 02:36:13.912647 | orchestrator |  "pv": [ 2026-04-07 02:36:13.912665 | orchestrator |  { 2026-04-07 02:36:13.912684 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-07 02:36:13.912702 | orchestrator |  "vg_name": "ceph-754aebfc-d76c-537f-941d-8ad36483cdb2" 2026-04-07 02:36:13.912720 | orchestrator |  }, 2026-04-07 02:36:13.912736 | orchestrator |  { 2026-04-07 02:36:13.912753 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-07 02:36:13.912791 | orchestrator |  "vg_name": "ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d" 2026-04-07 02:36:13.912810 | orchestrator |  } 2026-04-07 02:36:13.912828 | orchestrator |  ] 2026-04-07 02:36:13.912846 | orchestrator |  } 2026-04-07 02:36:13.912864 | orchestrator | } 2026-04-07 02:36:13.912883 | orchestrator | 2026-04-07 02:36:13.912900 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:36:13.912919 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-07 02:36:13.912936 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-07 02:36:13.912955 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-07 02:36:13.912973 | orchestrator | 2026-04-07 02:36:13.912992 | orchestrator | 2026-04-07 02:36:13.913011 | orchestrator | 2026-04-07 02:36:13.913029 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:36:13.913047 | orchestrator | Tuesday 07 April 2026 02:36:13 +0000 (0:00:00.172) 0:01:21.413 ********* 2026-04-07 02:36:13.913066 | orchestrator | =============================================================================== 2026-04-07 02:36:13.913084 | orchestrator | Create block VGs -------------------------------------------------------- 6.00s 2026-04-07 02:36:13.913102 | orchestrator | Create block LVs -------------------------------------------------------- 4.30s 2026-04-07 02:36:13.913119 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.90s 2026-04-07 02:36:13.913135 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2026-04-07 02:36:13.913154 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s 2026-04-07 02:36:13.913172 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2026-04-07 02:36:13.913189 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-04-07 02:36:13.913208 | orchestrator | Add known links to the list of available block devices ------------------ 1.54s 2026-04-07 02:36:13.913246 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2026-04-07 02:36:14.359122 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.39s 2026-04-07 02:36:14.359231 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2026-04-07 02:36:14.359252 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-04-07 02:36:14.359293 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.88s 2026-04-07 02:36:14.359310 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s 2026-04-07 02:36:14.359320 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.81s 2026-04-07 02:36:14.359328 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2026-04-07 02:36:14.359337 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.80s 2026-04-07 02:36:14.359359 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-04-07 02:36:14.359367 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-04-07 02:36:14.359376 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.78s 2026-04-07 02:36:27.098702 | orchestrator | 2026-04-07 02:36:27 | INFO  | Task de0c849d-f3df-4775-9c46-35188b4b21da (facts) was prepared for execution. 2026-04-07 02:36:27.098781 | orchestrator | 2026-04-07 02:36:27 | INFO  | It takes a moment until task de0c849d-f3df-4775-9c46-35188b4b21da (facts) has been started and output is visible here. 
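The play above gathers LVM state with the JSON report format (`vgs`/`lvs`/`pvs --reportformat json`), merges the outputs in the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task, and prints the result as the `lvm_report` structure shown in the log. A minimal sketch of that merge step in Python, using sample data modeled on the values printed above (the `combine_reports` helper name is illustrative, not part of the playbook):

```python
import json

# Sample strings resembling `lvs --reportformat json` and
# `pvs --reportformat json` output; values taken from the log above.
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2",
     "vg_name": "ceph-754aebfc-d76c-537f-941d-8ad36483cdb2"},
    {"lv_name": "osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d",
     "vg_name": "ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-754aebfc-d76c-537f-941d-8ad36483cdb2"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the first report block of each command into one dict,
    mirroring the shape of the lvm_report printed in the log."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_out, pvs_out)
```

With this merged view, later tasks such as "Fail if block LV defined in lvm_volumes is missing" only need to check membership of each expected VG/LV pair in `lvm_report["lv"]`.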
2026-04-07 02:36:41.997520 | orchestrator | 2026-04-07 02:36:41.997666 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-07 02:36:41.997728 | orchestrator | 2026-04-07 02:36:41.997742 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 02:36:41.997753 | orchestrator | Tuesday 07 April 2026 02:36:31 +0000 (0:00:00.323) 0:00:00.323 ********* 2026-04-07 02:36:41.997765 | orchestrator | ok: [testbed-manager] 2026-04-07 02:36:41.997777 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:36:41.997788 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:36:41.997799 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:36:41.997811 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:36:41.997829 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:36:41.997847 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:41.997863 | orchestrator | 2026-04-07 02:36:41.997895 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-07 02:36:41.997914 | orchestrator | Tuesday 07 April 2026 02:36:33 +0000 (0:00:01.292) 0:00:01.616 ********* 2026-04-07 02:36:41.997931 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:36:41.997950 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:36:41.997969 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:36:41.997987 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:36:41.998007 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:36:41.998111 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:36:41.998135 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:41.998154 | orchestrator | 2026-04-07 02:36:41.998172 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 02:36:41.998190 | orchestrator | 2026-04-07 02:36:41.998279 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-07 02:36:41.998302 | orchestrator | Tuesday 07 April 2026 02:36:34 +0000 (0:00:01.456) 0:00:03.073 ********* 2026-04-07 02:36:41.998323 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:36:41.998336 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:36:41.998347 | orchestrator | ok: [testbed-manager] 2026-04-07 02:36:41.998357 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:36:41.998368 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:36:41.998379 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:36:41.998390 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:36:41.998401 | orchestrator | 2026-04-07 02:36:41.998565 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 02:36:41.998592 | orchestrator | 2026-04-07 02:36:41.998609 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 02:36:41.998625 | orchestrator | Tuesday 07 April 2026 02:36:40 +0000 (0:00:06.223) 0:00:09.296 ********* 2026-04-07 02:36:41.998643 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:36:41.998661 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:36:41.998678 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:36:41.998696 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:36:41.998714 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:36:41.998732 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:36:41.998750 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:36:41.998768 | orchestrator | 2026-04-07 02:36:41.998786 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:36:41.998805 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998825 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-07 02:36:41.998844 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998863 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998883 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998924 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998942 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:36:41.998953 | orchestrator | 2026-04-07 02:36:41.998964 | orchestrator | 2026-04-07 02:36:41.998975 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:36:41.999001 | orchestrator | Tuesday 07 April 2026 02:36:41 +0000 (0:00:00.625) 0:00:09.922 ********* 2026-04-07 02:36:41.999019 | orchestrator | =============================================================================== 2026-04-07 02:36:41.999040 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.22s 2026-04-07 02:36:41.999060 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2026-04-07 02:36:41.999076 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-07 02:36:41.999091 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-04-07 02:36:44.766388 | orchestrator | 2026-04-07 02:36:44 | INFO  | Task 72536a55-96d8-49c4-ae8a-16fe4be3352d (ceph) was prepared for execution. 2026-04-07 02:36:44.766521 | orchestrator | 2026-04-07 02:36:44 | INFO  | It takes a moment until task 72536a55-96d8-49c4-ae8a-16fe4be3352d (ceph) has been started and output is visible here. 
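The `osism.commons.facts` tasks above ("Create custom facts directory", "Copy fact files") set up Ansible local facts: JSON files named `*.fact` under `/etc/ansible/facts.d` that the setup module exposes as `ansible_local.<name>` on each host. A minimal sketch of that mechanism, written against a temporary directory instead of the real path (the `load_local_facts` helper is illustrative, not part of the collection):

```python
import json
import pathlib
import tempfile

# Stand-in for /etc/ansible/facts.d on a managed host.
facts_d = pathlib.Path(tempfile.mkdtemp())
(facts_d / "testbed.fact").write_text(json.dumps({"role": "ceph-osd"}))

def load_local_facts(directory: pathlib.Path) -> dict:
    """Read every *.fact JSON file, keyed by file stem --
    roughly what fact gathering does to build ansible_local."""
    facts = {}
    for f in sorted(directory.glob("*.fact")):
        facts[f.stem] = json.loads(f.read_text())
    return facts

ansible_local = load_local_facts(facts_d)
```

This is why the subsequent "Gathers facts about hosts" play has to run after the fact files are in place: only a fresh fact-gathering pass picks the custom facts up into `ansible_local`.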
2026-04-07 02:37:04.237602 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 02:37:04.237705 | orchestrator | 2.16.14 2026-04-07 02:37:04.237719 | orchestrator | 2026-04-07 02:37:04.237728 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-07 02:37:04.237737 | orchestrator | 2026-04-07 02:37:04.237746 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-07 02:37:04.237754 | orchestrator | Tuesday 07 April 2026 02:36:50 +0000 (0:00:00.937) 0:00:00.937 ********* 2026-04-07 02:37:04.237765 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:37:04.237774 | orchestrator | 2026-04-07 02:37:04.237782 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-07 02:37:04.237790 | orchestrator | Tuesday 07 April 2026 02:36:51 +0000 (0:00:01.300) 0:00:02.237 ********* 2026-04-07 02:37:04.237798 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.237806 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.237814 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.237821 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.237829 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.237836 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.237845 | orchestrator | 2026-04-07 02:37:04.237853 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-07 02:37:04.237862 | orchestrator | Tuesday 07 April 2026 02:36:52 +0000 (0:00:01.293) 0:00:03.531 ********* 2026-04-07 02:37:04.237869 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.237877 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.237884 | orchestrator | ok: [testbed-node-5] 2026-04-07 
02:37:04.237892 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.237899 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.237907 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.237914 | orchestrator | 2026-04-07 02:37:04.237922 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-07 02:37:04.237930 | orchestrator | Tuesday 07 April 2026 02:36:53 +0000 (0:00:00.842) 0:00:04.373 ********* 2026-04-07 02:37:04.237938 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.237947 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.237955 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.237962 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.237991 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238000 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238008 | orchestrator | 2026-04-07 02:37:04.238062 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-07 02:37:04.238071 | orchestrator | Tuesday 07 April 2026 02:36:54 +0000 (0:00:00.989) 0:00:05.362 ********* 2026-04-07 02:37:04.238079 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.238088 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.238097 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.238106 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.238115 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238123 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238132 | orchestrator | 2026-04-07 02:37:04.238141 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-07 02:37:04.238149 | orchestrator | Tuesday 07 April 2026 02:36:55 +0000 (0:00:00.868) 0:00:06.231 ********* 2026-04-07 02:37:04.238157 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.238166 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.238174 | orchestrator | ok: 
[testbed-node-5] 2026-04-07 02:37:04.238183 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.238192 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238203 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238212 | orchestrator | 2026-04-07 02:37:04.238222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-07 02:37:04.238234 | orchestrator | Tuesday 07 April 2026 02:36:56 +0000 (0:00:00.660) 0:00:06.892 ********* 2026-04-07 02:37:04.238244 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.238254 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.238263 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.238272 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.238282 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238291 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238301 | orchestrator | 2026-04-07 02:37:04.238310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-07 02:37:04.238320 | orchestrator | Tuesday 07 April 2026 02:36:57 +0000 (0:00:00.881) 0:00:07.773 ********* 2026-04-07 02:37:04.238330 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:04.238338 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:04.238348 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:04.238356 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:04.238365 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:04.238373 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:04.238381 | orchestrator | 2026-04-07 02:37:04.238390 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-07 02:37:04.238399 | orchestrator | Tuesday 07 April 2026 02:36:57 +0000 (0:00:00.656) 0:00:08.429 ********* 2026-04-07 02:37:04.238408 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.238431 | orchestrator | 
ok: [testbed-node-4] 2026-04-07 02:37:04.238439 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.238447 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.238470 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238478 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238485 | orchestrator | 2026-04-07 02:37:04.238493 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-07 02:37:04.238500 | orchestrator | Tuesday 07 April 2026 02:36:58 +0000 (0:00:00.865) 0:00:09.295 ********* 2026-04-07 02:37:04.238508 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:37:04.238515 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:37:04.238523 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:37:04.238531 | orchestrator | 2026-04-07 02:37:04.238538 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-07 02:37:04.238545 | orchestrator | Tuesday 07 April 2026 02:36:59 +0000 (0:00:00.676) 0:00:09.971 ********* 2026-04-07 02:37:04.238560 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:04.238568 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:04.238575 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:04.238598 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:04.238607 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:04.238616 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:04.238624 | orchestrator | 2026-04-07 02:37:04.238633 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-07 02:37:04.238640 | orchestrator | Tuesday 07 April 2026 02:37:00 +0000 (0:00:00.807) 0:00:10.779 ********* 2026-04-07 02:37:04.238649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-04-07 02:37:04.238657 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:37:04.238666 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:37:04.238675 | orchestrator | 2026-04-07 02:37:04.238684 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-07 02:37:04.238693 | orchestrator | Tuesday 07 April 2026 02:37:02 +0000 (0:00:02.511) 0:00:13.291 ********* 2026-04-07 02:37:04.238702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 02:37:04.238710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 02:37:04.238717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 02:37:04.238725 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:04.238732 | orchestrator | 2026-04-07 02:37:04.238740 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-07 02:37:04.238748 | orchestrator | Tuesday 07 April 2026 02:37:03 +0000 (0:00:00.474) 0:00:13.765 ********* 2026-04-07 02:37:04.238758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238784 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:04.238792 | orchestrator | 2026-04-07 02:37:04.238799 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-07 02:37:04.238806 | orchestrator | Tuesday 07 April 2026 02:37:03 +0000 (0:00:00.630) 0:00:14.395 ********* 2026-04-07 02:37:04.238816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:04.238848 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:04.238856 | orchestrator | 2026-04-07 02:37:04.238868 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-07 02:37:04.238876 | orchestrator | Tuesday 07 April 2026 02:37:04 +0000 (0:00:00.195) 0:00:14.591 ********* 2026-04-07 02:37:04.238891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 02:37:01.181319', 'end': '2026-04-07 02:37:01.225155', 'delta': '0:00:00.043836', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-07 02:37:15.278906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 02:37:01.727618', 'end': '2026-04-07 02:37:01.776484', 'delta': '0:00:00.048866', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-07 02:37:15.278989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 02:37:02.320514', 'end': '2026-04-07 02:37:02.365984', 'delta': 
'0:00:00.045470', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-07 02:37:15.278998 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279006 | orchestrator | 2026-04-07 02:37:15.279012 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-07 02:37:15.279019 | orchestrator | Tuesday 07 April 2026 02:37:04 +0000 (0:00:00.177) 0:00:14.769 ********* 2026-04-07 02:37:15.279024 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:15.279030 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:15.279035 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:15.279040 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:15.279045 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:15.279050 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:15.279056 | orchestrator | 2026-04-07 02:37:15.279061 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-07 02:37:15.279066 | orchestrator | Tuesday 07 April 2026 02:37:05 +0000 (0:00:00.846) 0:00:15.615 ********* 2026-04-07 02:37:15.279071 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-07 02:37:15.279077 | orchestrator | 2026-04-07 02:37:15.279082 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-07 02:37:15.279087 | orchestrator | Tuesday 07 April 2026 02:37:06 +0000 (0:00:00.966) 0:00:16.582 ********* 2026-04-07 02:37:15.279105 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279110 | 
orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279116 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279121 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279126 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279131 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279136 | orchestrator | 2026-04-07 02:37:15.279141 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-07 02:37:15.279146 | orchestrator | Tuesday 07 April 2026 02:37:06 +0000 (0:00:00.944) 0:00:17.526 ********* 2026-04-07 02:37:15.279152 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279157 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279162 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279167 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279172 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279177 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279182 | orchestrator | 2026-04-07 02:37:15.279187 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 02:37:15.279193 | orchestrator | Tuesday 07 April 2026 02:37:08 +0000 (0:00:01.359) 0:00:18.885 ********* 2026-04-07 02:37:15.279198 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279203 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279208 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279213 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279218 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279228 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279234 | orchestrator | 2026-04-07 02:37:15.279239 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-07 02:37:15.279244 | orchestrator | Tuesday 07 April 2026 02:37:09 +0000 
(0:00:00.697) 0:00:19.582 ********* 2026-04-07 02:37:15.279249 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279254 | orchestrator | 2026-04-07 02:37:15.279259 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-07 02:37:15.279264 | orchestrator | Tuesday 07 April 2026 02:37:09 +0000 (0:00:00.136) 0:00:19.719 ********* 2026-04-07 02:37:15.279270 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279275 | orchestrator | 2026-04-07 02:37:15.279280 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 02:37:15.279285 | orchestrator | Tuesday 07 April 2026 02:37:09 +0000 (0:00:00.244) 0:00:19.963 ********* 2026-04-07 02:37:15.279307 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279312 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279324 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279329 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279334 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279340 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279345 | orchestrator | 2026-04-07 02:37:15.279360 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-07 02:37:15.279366 | orchestrator | Tuesday 07 April 2026 02:37:10 +0000 (0:00:00.888) 0:00:20.852 ********* 2026-04-07 02:37:15.279371 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279376 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279381 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279386 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279391 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279396 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279402 | orchestrator | 2026-04-07 02:37:15.279407 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-04-07 02:37:15.279412 | orchestrator | Tuesday 07 April 2026 02:37:11 +0000 (0:00:00.712) 0:00:21.565 ********* 2026-04-07 02:37:15.279436 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279442 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279447 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279457 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279462 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279467 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279473 | orchestrator | 2026-04-07 02:37:15.279479 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-07 02:37:15.279485 | orchestrator | Tuesday 07 April 2026 02:37:11 +0000 (0:00:00.924) 0:00:22.489 ********* 2026-04-07 02:37:15.279491 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279496 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279502 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279508 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279514 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279520 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279526 | orchestrator | 2026-04-07 02:37:15.279532 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-07 02:37:15.279538 | orchestrator | Tuesday 07 April 2026 02:37:12 +0000 (0:00:00.684) 0:00:23.174 ********* 2026-04-07 02:37:15.279543 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279549 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279555 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279561 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279566 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279572 | orchestrator 
| skipping: [testbed-node-2] 2026-04-07 02:37:15.279578 | orchestrator | 2026-04-07 02:37:15.279584 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-07 02:37:15.279590 | orchestrator | Tuesday 07 April 2026 02:37:13 +0000 (0:00:00.936) 0:00:24.110 ********* 2026-04-07 02:37:15.279597 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279602 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279608 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279614 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279619 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279624 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279629 | orchestrator | 2026-04-07 02:37:15.279635 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-07 02:37:15.279641 | orchestrator | Tuesday 07 April 2026 02:37:14 +0000 (0:00:00.691) 0:00:24.801 ********* 2026-04-07 02:37:15.279646 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:15.279651 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:15.279663 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:15.279668 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:15.279673 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:15.279678 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:15.279683 | orchestrator | 2026-04-07 02:37:15.279689 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-07 02:37:15.279694 | orchestrator | Tuesday 07 April 2026 02:37:15 +0000 (0:00:00.885) 0:00:25.686 ********* 2026-04-07 02:37:15.279700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.279709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.279723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:37:15.405199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:37:15.405214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.405224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:37:15.405240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:37:15.702793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
[... repeated 'skipping' loop output elided: between 02:37:15.702897 and 02:37:16.543142 the orchestrator logged one skip entry per block device (loop0-loop7, dm-0/dm-1, sda with partitions sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0) on testbed-node-0 through testbed-node-5; every item was skipped because the condition 'osd_auto_discovery | default(False) | bool' evaluated to False ...]
2026-04-07 02:37:16.543153 | orchestrator | 2026-04-07 02:37:16.543162 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 02:37:16.543172 | orchestrator | Tuesday 07 April 2026 02:37:16 +0000 (0:00:01.162) 0:00:26.849 ********* 2026-04-07 02:37:16.543181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543233 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.543308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885495 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-07 02:37:16.885517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.885569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921534 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921626 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:16.921648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173382 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:17.173403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173493 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173514 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173582 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:17.173610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:37:17.173629 | orchestrator | skipping: 
[testbed-node-5] => (item=loop3)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.173657 | orchestrator | skipping: [testbed-node-5] => (item=loop4)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.173676 | orchestrator | skipping: [testbed-node-5] => (item=loop5)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.173695 | orchestrator | skipping: [testbed-node-5] => (item=loop6)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.173712 | orchestrator | skipping: [testbed-node-0] => (item=loop0)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.173750 | orchestrator | skipping: [testbed-node-5] => (item=loop7)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.235325 | orchestrator | skipping: [testbed-node-0] => (item=loop1)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235532 | orchestrator | skipping: [testbed-node-5] => (item=sda: QEMU HARDDISK, 80.00 GB, cloudimg-rootfs/UEFI/BOOT partitions)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.235556 | orchestrator | skipping: [testbed-node-0] => (item=loop2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235586 | orchestrator | skipping: [testbed-node-0] => (item=loop3)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235600 | orchestrator | skipping: [testbed-node-5] => (item=sdb: QEMU HARDDISK, 20.00 GB, ceph OSD LVM holder)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.235620 | orchestrator | skipping: [testbed-node-0] => (item=loop4)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235632 | orchestrator | skipping: [testbed-node-5] => (item=sdc: QEMU HARDDISK, 20.00 GB, ceph OSD LVM holder)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.235684 | orchestrator | skipping: [testbed-node-0] => (item=loop5)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235697 | orchestrator | skipping: [testbed-node-0] => (item=loop6)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.235723 | orchestrator | skipping: [testbed-node-5] => (item=sdd: QEMU HARDDISK, 20.00 GB, no partitions)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.441980 | orchestrator | skipping: [testbed-node-0] => (item=loop7)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442156 | orchestrator | skipping: [testbed-node-5] => (item=sr0: QEMU DVD-ROM, config-2)  [false_condition: osd_auto_discovery | default(False) | bool]
2026-04-07 02:37:17.442213 | orchestrator | skipping: [testbed-node-0] => (item=sda: QEMU HARDDISK, 80.00 GB, cloudimg-rootfs/UEFI/BOOT partitions)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442270 | orchestrator | skipping: [testbed-node-0] => (item=sr0: QEMU DVD-ROM, config-2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442319 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:37:17.442340 | orchestrator | skipping: [testbed-node-1] => (item=loop0)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442358 | orchestrator | skipping: [testbed-node-1] => (item=loop1)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442377 | orchestrator | skipping: [testbed-node-1] => (item=loop2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442395 | orchestrator | skipping: [testbed-node-1] => (item=loop3)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442413 | orchestrator | skipping: [testbed-node-1] => (item=loop4)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442488 | orchestrator | skipping: [testbed-node-1] => (item=loop5)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.442539 | orchestrator | skipping: [testbed-node-1] => (item=loop6)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.680986 | orchestrator | skipping: [testbed-node-1] => (item=loop7)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681100 | orchestrator | skipping: [testbed-node-1] => (item=sda: QEMU HARDDISK, 80.00 GB, cloudimg-rootfs/UEFI/BOOT partitions)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681168 | orchestrator | skipping: [testbed-node-1] => (item=sr0: QEMU DVD-ROM, config-2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681181 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:37:17.681191 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:37:17.681215 | orchestrator | skipping: [testbed-node-2] => (item=loop0)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681224 | orchestrator | skipping: [testbed-node-2] => (item=loop1)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681233 | orchestrator | skipping: [testbed-node-2] => (item=loop2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681241 | orchestrator | skipping: [testbed-node-2] => (item=loop3)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681249 | orchestrator | skipping: [testbed-node-2] => (item=loop4)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681268 | orchestrator | skipping: [testbed-node-2] => (item=loop5)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681277 | orchestrator | skipping: [testbed-node-2] => (item=loop6)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:17.681292 | orchestrator | skipping: [testbed-node-2] => (item=loop7)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:25.453900 | orchestrator | skipping: [testbed-node-2] => (item=sda: QEMU HARDDISK, 80.00 GB, cloudimg-rootfs/UEFI/BOOT partitions)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:25.454155 | orchestrator | skipping: [testbed-node-2] => (item=sr0: QEMU DVD-ROM, config-2)  [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-04-07 02:37:25.454193 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:37:25.454215 | orchestrator |
2026-04-07 02:37:25.454236 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-07 02:37:25.454257 | orchestrator | Tuesday 07 April 2026 02:37:17 +0000 (0:00:01.366) 0:00:28.215 *********
2026-04-07 02:37:25.454275 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:37:25.454294 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:37:25.454312 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:37:25.454330 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:37:25.454347 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:37:25.454366 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:37:25.454384 | orchestrator |
2026-04-07 02:37:25.454404 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-07 02:37:25.454422 | orchestrator | Tuesday 07 April 2026 02:37:18 +0000 (0:00:01.024) 0:00:29.240 *********
2026-04-07 02:37:25.454469 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:37:25.454489 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:37:25.454508 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:37:25.454526 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:37:25.454544 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:37:25.454556 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:37:25.454569 | orchestrator |
2026-04-07 02:37:25.454582 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-07 02:37:25.454594 | orchestrator | Tuesday 07 April 2026 02:37:19 +0000 (0:00:00.637) 0:00:30.154 *********
2026-04-07 02:37:25.454606 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:37:25.454619 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:37:25.454630 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:37:25.454661 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:37:25.454673 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:37:25.454684 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:37:25.454695 | orchestrator |
2026-04-07 02:37:25.454706 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-07 02:37:25.454718 | orchestrator | Tuesday 07 April 2026 02:37:20 +0000 (0:00:00.637) 0:00:30.791 *********
2026-04-07 02:37:25.454729 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:37:25.454740 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:37:25.454750 | orchestrator | skipping: [testbed-node-5]
2026-04-07
02:37:25.454761 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:25.454772 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:25.454782 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:25.454793 | orchestrator | 2026-04-07 02:37:25.454803 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 02:37:25.454814 | orchestrator | Tuesday 07 April 2026 02:37:21 +0000 (0:00:00.858) 0:00:31.650 ********* 2026-04-07 02:37:25.454825 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:25.454835 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:25.454846 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:25.454869 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:25.454879 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:25.454890 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:25.454900 | orchestrator | 2026-04-07 02:37:25.454911 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 02:37:25.454922 | orchestrator | Tuesday 07 April 2026 02:37:21 +0000 (0:00:00.667) 0:00:32.317 ********* 2026-04-07 02:37:25.454932 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:25.454943 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:25.454953 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:25.454964 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:25.454974 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:25.454985 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:25.454996 | orchestrator | 2026-04-07 02:37:25.455006 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 02:37:25.455017 | orchestrator | Tuesday 07 April 2026 02:37:22 +0000 (0:00:00.919) 0:00:33.237 ********* 2026-04-07 02:37:25.455028 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-04-07 02:37:25.455039 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-07 02:37:25.455049 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-07 02:37:25.455060 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-07 02:37:25.455070 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-07 02:37:25.455081 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-07 02:37:25.455092 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-07 02:37:25.455102 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 02:37:25.455113 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-07 02:37:25.455124 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-07 02:37:25.455134 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-07 02:37:25.455145 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 02:37:25.455155 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-07 02:37:25.455166 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 02:37:25.455176 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-07 02:37:25.455187 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-07 02:37:25.455198 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-07 02:37:25.455215 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-07 02:37:25.455227 | orchestrator | 2026-04-07 02:37:25.455237 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 02:37:25.455248 | orchestrator | Tuesday 07 April 2026 02:37:24 +0000 (0:00:01.728) 0:00:34.966 ********* 2026-04-07 02:37:25.455259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 02:37:25.455270 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-04-07 02:37:25.455281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 02:37:25.455292 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:25.455303 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-07 02:37:25.455313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-07 02:37:25.455324 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-07 02:37:25.455335 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:25.455345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-07 02:37:25.455356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-07 02:37:25.455367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-07 02:37:25.455377 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:25.455388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 02:37:25.455399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 02:37:25.455416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 02:37:25.455427 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:25.455481 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-07 02:37:25.455501 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-07 02:37:25.455520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-07 02:37:25.455539 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:25.455553 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-07 02:37:25.455564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-07 02:37:25.455574 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-07 02:37:25.455585 | orchestrator | 
skipping: [testbed-node-2] 2026-04-07 02:37:25.455596 | orchestrator | 2026-04-07 02:37:25.455607 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 02:37:25.455626 | orchestrator | Tuesday 07 April 2026 02:37:25 +0000 (0:00:01.021) 0:00:35.987 ********* 2026-04-07 02:37:44.561801 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:44.561920 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.561933 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.561942 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:37:44.561951 | orchestrator | 2026-04-07 02:37:44.561959 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 02:37:44.561970 | orchestrator | Tuesday 07 April 2026 02:37:26 +0000 (0:00:01.148) 0:00:37.136 ********* 2026-04-07 02:37:44.561978 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.561986 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.561994 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.562001 | orchestrator | 2026-04-07 02:37:44.562008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 02:37:44.562063 | orchestrator | Tuesday 07 April 2026 02:37:26 +0000 (0:00:00.369) 0:00:37.506 ********* 2026-04-07 02:37:44.562072 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.562080 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.562087 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.562099 | orchestrator | 2026-04-07 02:37:44.562110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-07 02:37:44.562155 | orchestrator | Tuesday 07 April 2026 02:37:27 +0000 
(0:00:00.378) 0:00:37.884 ********* 2026-04-07 02:37:44.562168 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.562181 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.562193 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.562205 | orchestrator | 2026-04-07 02:37:44.562219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 02:37:44.562227 | orchestrator | Tuesday 07 April 2026 02:37:27 +0000 (0:00:00.565) 0:00:38.449 ********* 2026-04-07 02:37:44.562234 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.562243 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:44.562250 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:44.562257 | orchestrator | 2026-04-07 02:37:44.562265 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-07 02:37:44.562274 | orchestrator | Tuesday 07 April 2026 02:37:28 +0000 (0:00:00.461) 0:00:38.911 ********* 2026-04-07 02:37:44.562287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:37:44.562305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:37:44.562318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:37:44.562330 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.562342 | orchestrator | 2026-04-07 02:37:44.562355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 02:37:44.562393 | orchestrator | Tuesday 07 April 2026 02:37:28 +0000 (0:00:00.441) 0:00:39.352 ********* 2026-04-07 02:37:44.562406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:37:44.562418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:37:44.562431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:37:44.562442 | orchestrator | 
skipping: [testbed-node-3] 2026-04-07 02:37:44.562453 | orchestrator | 2026-04-07 02:37:44.562488 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 02:37:44.562501 | orchestrator | Tuesday 07 April 2026 02:37:29 +0000 (0:00:00.421) 0:00:39.773 ********* 2026-04-07 02:37:44.562530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:37:44.562544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:37:44.562557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:37:44.562571 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.562584 | orchestrator | 2026-04-07 02:37:44.562596 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 02:37:44.562607 | orchestrator | Tuesday 07 April 2026 02:37:29 +0000 (0:00:00.422) 0:00:40.196 ********* 2026-04-07 02:37:44.562615 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.562623 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:44.562632 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:44.562640 | orchestrator | 2026-04-07 02:37:44.562649 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-07 02:37:44.562658 | orchestrator | Tuesday 07 April 2026 02:37:30 +0000 (0:00:00.397) 0:00:40.593 ********* 2026-04-07 02:37:44.562667 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 02:37:44.562675 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-07 02:37:44.562684 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-07 02:37:44.562693 | orchestrator | 2026-04-07 02:37:44.562700 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 02:37:44.562707 | orchestrator | Tuesday 07 April 2026 02:37:31 +0000 (0:00:01.083) 0:00:41.677 ********* 2026-04-07 02:37:44.562715 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:37:44.562723 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:37:44.562730 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:37:44.562737 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 02:37:44.562744 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 02:37:44.562752 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 02:37:44.562759 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 02:37:44.562766 | orchestrator | 2026-04-07 02:37:44.562773 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 02:37:44.562781 | orchestrator | Tuesday 07 April 2026 02:37:32 +0000 (0:00:00.884) 0:00:42.561 ********* 2026-04-07 02:37:44.562805 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:37:44.562812 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:37:44.562820 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:37:44.562827 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 02:37:44.562834 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 02:37:44.562841 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 02:37:44.562848 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 02:37:44.562856 | orchestrator | 2026-04-07 02:37:44.562863 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 02:37:44.562877 | orchestrator | Tuesday 07 April 2026 02:37:34 +0000 (0:00:02.061) 0:00:44.623 ********* 2026-04-07 02:37:44.562885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:37:44.562894 | orchestrator | 2026-04-07 02:37:44.562902 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 02:37:44.562909 | orchestrator | Tuesday 07 April 2026 02:37:35 +0000 (0:00:01.332) 0:00:45.956 ********* 2026-04-07 02:37:44.562916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:37:44.562923 | orchestrator | 2026-04-07 02:37:44.562930 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 02:37:44.562937 | orchestrator | Tuesday 07 April 2026 02:37:36 +0000 (0:00:01.331) 0:00:47.287 ********* 2026-04-07 02:37:44.562945 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.562953 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.562965 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.562983 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:44.562997 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:44.563008 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:44.563019 | orchestrator | 2026-04-07 02:37:44.563031 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 02:37:44.563042 | orchestrator | Tuesday 07 April 2026 02:37:38 +0000 (0:00:01.311) 0:00:48.599 ********* 2026-04-07 02:37:44.563053 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
02:37:44.563064 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.563076 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.563088 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:44.563100 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.563112 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:44.563125 | orchestrator | 2026-04-07 02:37:44.563137 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 02:37:44.563150 | orchestrator | Tuesday 07 April 2026 02:37:38 +0000 (0:00:00.749) 0:00:49.348 ********* 2026-04-07 02:37:44.563162 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.563176 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:44.563188 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:44.563199 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:44.563210 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.563229 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.563242 | orchestrator | 2026-04-07 02:37:44.563254 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 02:37:44.563265 | orchestrator | Tuesday 07 April 2026 02:37:39 +0000 (0:00:00.916) 0:00:50.265 ********* 2026-04-07 02:37:44.563278 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:44.563290 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.563301 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.563313 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:37:44.563325 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.563337 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:37:44.563349 | orchestrator | 2026-04-07 02:37:44.563361 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 02:37:44.563373 | orchestrator | Tuesday 07 April 2026 02:37:40 +0000 (0:00:00.785) 0:00:51.051 ********* 
2026-04-07 02:37:44.563386 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.563398 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.563411 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.563428 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:37:44.563442 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:37:44.563454 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:37:44.563496 | orchestrator | 2026-04-07 02:37:44.563511 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 02:37:44.563534 | orchestrator | Tuesday 07 April 2026 02:37:41 +0000 (0:00:01.376) 0:00:52.428 ********* 2026-04-07 02:37:44.563548 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.563562 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.563576 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.563590 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:44.563603 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.563614 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.563621 | orchestrator | 2026-04-07 02:37:44.563628 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 02:37:44.563636 | orchestrator | Tuesday 07 April 2026 02:37:42 +0000 (0:00:00.651) 0:00:53.079 ********* 2026-04-07 02:37:44.563643 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:37:44.563650 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:37:44.563657 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:37:44.563664 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:37:44.563671 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:37:44.563678 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:37:44.563685 | orchestrator | 2026-04-07 02:37:44.563693 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-04-07 02:37:44.563700 | orchestrator | Tuesday 07 April 2026 02:37:43 +0000 (0:00:00.921) 0:00:54.001 ********* 2026-04-07 02:37:44.563707 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:37:44.563725 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.098716 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.098797 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:38:05.098804 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:38:05.098809 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:38:05.098814 | orchestrator | 2026-04-07 02:38:05.098819 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 02:38:05.098825 | orchestrator | Tuesday 07 April 2026 02:37:44 +0000 (0:00:01.092) 0:00:55.093 ********* 2026-04-07 02:38:05.098829 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.098833 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.098837 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.098841 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:38:05.098844 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:38:05.098848 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:38:05.098852 | orchestrator | 2026-04-07 02:38:05.098856 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 02:38:05.098860 | orchestrator | Tuesday 07 April 2026 02:37:45 +0000 (0:00:01.377) 0:00:56.471 ********* 2026-04-07 02:38:05.098864 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:38:05.098869 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:38:05.098875 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:38:05.098882 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:38:05.098888 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.098894 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:38:05.098900 | orchestrator | 2026-04-07 02:38:05.098906 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 02:38:05.098912 | orchestrator | Tuesday 07 April 2026 02:37:46 +0000 (0:00:00.720) 0:00:57.192 ********* 2026-04-07 02:38:05.098918 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:38:05.098922 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:38:05.098925 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:38:05.098929 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:38:05.098933 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:38:05.098937 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:38:05.098941 | orchestrator | 2026-04-07 02:38:05.098945 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 02:38:05.098951 | orchestrator | Tuesday 07 April 2026 02:37:47 +0000 (0:00:00.976) 0:00:58.169 ********* 2026-04-07 02:38:05.098958 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.098982 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.098988 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.098992 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:38:05.098996 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.099000 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:38:05.099003 | orchestrator | 2026-04-07 02:38:05.099007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 02:38:05.099011 | orchestrator | Tuesday 07 April 2026 02:37:48 +0000 (0:00:00.679) 0:00:58.848 ********* 2026-04-07 02:38:05.099015 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.099018 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.099022 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.099026 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:38:05.099030 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.099034 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 02:38:05.099037 | orchestrator | 2026-04-07 02:38:05.099041 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 02:38:05.099045 | orchestrator | Tuesday 07 April 2026 02:37:49 +0000 (0:00:00.919) 0:00:59.767 ********* 2026-04-07 02:38:05.099049 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.099053 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.099056 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.099060 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:38:05.099064 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.099079 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:38:05.099083 | orchestrator | 2026-04-07 02:38:05.099086 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 02:38:05.099090 | orchestrator | Tuesday 07 April 2026 02:37:49 +0000 (0:00:00.680) 0:01:00.448 ********* 2026-04-07 02:38:05.099094 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:38:05.099098 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:38:05.099102 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:38:05.099105 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:38:05.099109 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.099113 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:38:05.099117 | orchestrator | 2026-04-07 02:38:05.099121 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 02:38:05.099124 | orchestrator | Tuesday 07 April 2026 02:37:50 +0000 (0:00:00.911) 0:01:01.359 ********* 2026-04-07 02:38:05.099128 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:38:05.099132 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:38:05.099136 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:38:05.099140 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 02:38:05.099144 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:38:05.099147 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:38:05.099151 | orchestrator | 2026-04-07 02:38:05.099155 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 02:38:05.099159 | orchestrator | Tuesday 07 April 2026 02:37:51 +0000 (0:00:00.674) 0:01:02.034 ********* 2026-04-07 02:38:05.099163 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:38:05.099166 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:38:05.099170 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:38:05.099174 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:38:05.099178 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:38:05.099181 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:38:05.099185 | orchestrator | 2026-04-07 02:38:05.099189 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 02:38:05.099193 | orchestrator | Tuesday 07 April 2026 02:37:52 +0000 (0:00:00.911) 0:01:02.946 ********* 2026-04-07 02:38:05.099197 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.099200 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.099204 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:38:05.099208 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:38:05.099211 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:38:05.099215 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:38:05.099223 | orchestrator | 2026-04-07 02:38:05.099226 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 02:38:05.099230 | orchestrator | Tuesday 07 April 2026 02:37:53 +0000 (0:00:00.710) 0:01:03.657 ********* 2026-04-07 02:38:05.099234 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:38:05.099250 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:38:05.099254 | orchestrator | ok: [testbed-node-5] 
2026-04-07 02:38:05.099258 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:38:05.099262 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:38:05.099265 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:38:05.099269 | orchestrator |
2026-04-07 02:38:05.099273 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-07 02:38:05.099277 | orchestrator | Tuesday 07 April 2026 02:37:54 +0000 (0:00:01.462) 0:01:05.119 *********
2026-04-07 02:38:05.099281 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:38:05.099286 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:38:05.099291 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:38:05.099295 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:38:05.099300 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:38:05.099304 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:38:05.099308 | orchestrator |
2026-04-07 02:38:05.099313 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-07 02:38:05.099318 | orchestrator | Tuesday 07 April 2026 02:37:56 +0000 (0:00:01.841) 0:01:06.961 *********
2026-04-07 02:38:05.099322 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:38:05.099327 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:38:05.099331 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:38:05.099335 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:38:05.099340 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:38:05.099345 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:38:05.099349 | orchestrator |
2026-04-07 02:38:05.099354 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-07 02:38:05.099358 | orchestrator | Tuesday 07 April 2026 02:37:58 +0000 (0:00:02.326) 0:01:09.287 *********
2026-04-07 02:38:05.099364 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:38:05.099370 | orchestrator |
2026-04-07 02:38:05.099374 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-07 02:38:05.099380 | orchestrator | Tuesday 07 April 2026 02:38:00 +0000 (0:00:01.294) 0:01:10.581 *********
2026-04-07 02:38:05.099384 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:38:05.099389 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:38:05.099394 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:38:05.099398 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:38:05.099403 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:38:05.099407 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:38:05.099411 | orchestrator |
2026-04-07 02:38:05.099415 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-07 02:38:05.099419 | orchestrator | Tuesday 07 April 2026 02:38:00 +0000 (0:00:00.663) 0:01:11.245 *********
2026-04-07 02:38:05.099423 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:38:05.099426 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:38:05.099430 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:38:05.099434 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:38:05.099438 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:38:05.099441 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:38:05.099445 | orchestrator |
2026-04-07 02:38:05.099449 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-07 02:38:05.099453 | orchestrator | Tuesday 07 April 2026 02:38:01 +0000 (0:00:00.906) 0:01:12.151 *********
2026-04-07 02:38:05.099456 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099463 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099471 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099475 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099479 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099501 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099506 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099509 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099513 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-07 02:38:05.099517 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099521 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099525 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-07 02:38:05.099528 | orchestrator |
2026-04-07 02:38:05.099532 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-07 02:38:05.099536 | orchestrator | Tuesday 07 April 2026 02:38:03 +0000 (0:00:01.433) 0:01:13.584 *********
2026-04-07 02:38:05.099540 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:38:05.099544 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:38:05.099547 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:38:05.099551 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:38:05.099555 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:38:05.099559 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:38:05.099563 | orchestrator |
2026-04-07 02:38:05.099566 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-07 02:38:05.099570 | orchestrator | Tuesday 07 April 2026 02:38:04 +0000 (0:00:01.307) 0:01:14.892 *********
2026-04-07 02:38:05.099574 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:38:05.099578 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:38:05.099582 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:38:05.099585 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:38:05.099589 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:38:05.099593 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:38:05.099597 | orchestrator |
2026-04-07 02:38:05.099604 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-07 02:39:27.730972 | orchestrator | Tuesday 07 April 2026 02:38:05 +0000 (0:00:00.737) 0:01:15.629 *********
2026-04-07 02:39:27.731075 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.731091 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.731102 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.731112 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.731122 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.731132 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.731142 | orchestrator |
2026-04-07 02:39:27.731152 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-07 02:39:27.731163 | orchestrator | Tuesday 07 April 2026 02:38:05 +0000 (0:00:00.903) 0:01:16.532 *********
2026-04-07 02:39:27.731173 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.731183 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.731193 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.731202 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.731212 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.731221 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.731231 | orchestrator |
2026-04-07 02:39:27.731241 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-07 02:39:27.731251 | orchestrator | Tuesday 07 April 2026 02:38:06 +0000 (0:00:00.631) 0:01:17.163 *********
2026-04-07 02:39:27.731286 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:39:27.731305 | orchestrator |
2026-04-07 02:39:27.731322 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-07 02:39:27.731337 | orchestrator | Tuesday 07 April 2026 02:38:07 +0000 (0:00:01.360) 0:01:18.524 *********
2026-04-07 02:39:27.731352 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:27.731369 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:39:27.731384 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:39:27.731398 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:27.731414 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:27.731430 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:39:27.731445 | orchestrator |
2026-04-07 02:39:27.731463 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-07 02:39:27.731480 | orchestrator | Tuesday 07 April 2026 02:39:13 +0000 (0:01:05.232) 0:02:23.757 *********
2026-04-07 02:39:27.731497 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731582 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731600 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731614 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.731626 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731637 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731649 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731660 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.731670 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731680 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731703 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731713 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.731723 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731733 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731742 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731751 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.731761 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731770 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731780 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731789 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.731798 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-07 02:39:27.731808 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-07 02:39:27.731817 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-07 02:39:27.731827 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.731836 | orchestrator |
2026-04-07 02:39:27.731846 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-07 02:39:27.731855 | orchestrator | Tuesday 07 April 2026 02:39:13 +0000 (0:00:00.756) 0:02:24.513 *********
2026-04-07 02:39:27.731865 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.731874 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.731885 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.731894 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.731903 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.731922 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.731932 | orchestrator |
2026-04-07 02:39:27.731942 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-07 02:39:27.731951 | orchestrator | Tuesday 07 April 2026 02:39:14 +0000 (0:00:00.961) 0:02:25.475 *********
2026-04-07 02:39:27.731961 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.731970 | orchestrator |
2026-04-07 02:39:27.731993 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-07 02:39:27.732013 | orchestrator | Tuesday 07 April 2026 02:39:15 +0000 (0:00:00.149) 0:02:25.624 *********
2026-04-07 02:39:27.732022 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732050 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732060 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732070 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732079 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732088 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732098 | orchestrator |
2026-04-07 02:39:27.732107 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-07 02:39:27.732117 | orchestrator | Tuesday 07 April 2026 02:39:15 +0000 (0:00:00.765) 0:02:26.390 *********
2026-04-07 02:39:27.732126 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732135 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732145 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732154 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732163 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732172 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732181 | orchestrator |
2026-04-07 02:39:27.732191 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-07 02:39:27.732200 | orchestrator | Tuesday 07 April 2026 02:39:16 +0000 (0:00:00.990) 0:02:27.381 *********
2026-04-07 02:39:27.732210 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732219 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732228 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732238 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732247 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732256 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732265 | orchestrator |
2026-04-07 02:39:27.732275 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-07 02:39:27.732284 | orchestrator | Tuesday 07 April 2026 02:39:17 +0000 (0:00:00.669) 0:02:28.050 *********
2026-04-07 02:39:27.732294 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:27.732303 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:27.732312 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:27.732322 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:39:27.732331 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:39:27.732343 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:39:27.732360 | orchestrator |
2026-04-07 02:39:27.732375 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-07 02:39:27.732391 | orchestrator | Tuesday 07 April 2026 02:39:20 +0000 (0:00:03.378) 0:02:31.429 *********
2026-04-07 02:39:27.732406 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:27.732422 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:27.732438 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:27.732466 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:39:27.732480 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:39:27.732494 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:39:27.732577 | orchestrator |
2026-04-07 02:39:27.732595 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-07 02:39:27.732610 | orchestrator | Tuesday 07 April 2026 02:39:21 +0000 (0:00:00.715) 0:02:32.145 *********
2026-04-07 02:39:27.732627 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:39:27.732644 | orchestrator |
2026-04-07 02:39:27.732661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-07 02:39:27.732689 | orchestrator | Tuesday 07 April 2026 02:39:23 +0000 (0:00:01.644) 0:02:33.790 *********
2026-04-07 02:39:27.732707 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732724 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732740 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732756 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732779 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732789 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732798 | orchestrator |
2026-04-07 02:39:27.732808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-07 02:39:27.732817 | orchestrator | Tuesday 07 April 2026 02:39:24 +0000 (0:00:00.951) 0:02:34.741 *********
2026-04-07 02:39:27.732827 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732836 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732845 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732854 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732864 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732873 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732882 | orchestrator |
2026-04-07 02:39:27.732892 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-07 02:39:27.732901 | orchestrator | Tuesday 07 April 2026 02:39:24 +0000 (0:00:00.721) 0:02:35.463 *********
2026-04-07 02:39:27.732911 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.732920 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.732929 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.732938 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.732948 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.732957 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.732967 | orchestrator |
2026-04-07 02:39:27.732976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-07 02:39:27.733000 | orchestrator | Tuesday 07 April 2026 02:39:26 +0000 (0:00:01.135) 0:02:36.599 *********
2026-04-07 02:39:27.733010 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.733029 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.733038 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.733054 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.733070 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.733085 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.733101 | orchestrator |
2026-04-07 02:39:27.733116 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-07 02:39:27.733129 | orchestrator | Tuesday 07 April 2026 02:39:26 +0000 (0:00:00.667) 0:02:37.266 *********
2026-04-07 02:39:27.733144 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:27.733158 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:27.733173 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:27.733188 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:27.733203 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:27.733217 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:27.733232 | orchestrator |
2026-04-07 02:39:27.733248 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-07 02:39:27.733282 | orchestrator | Tuesday 07 April 2026 02:39:27 +0000 (0:00:00.993) 0:02:38.259 *********
2026-04-07 02:39:40.001643 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:40.001741 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:40.001758 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:40.001774 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:40.001788 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:40.001802 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:40.001818 | orchestrator |
2026-04-07 02:39:40.001833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-07 02:39:40.001849 | orchestrator | Tuesday 07 April 2026 02:39:28 +0000 (0:00:00.749) 0:02:39.008 *********
2026-04-07 02:39:40.001879 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:40.001888 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:40.001896 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:40.001905 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:40.001913 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:40.001920 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:40.001928 | orchestrator |
2026-04-07 02:39:40.001936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-07 02:39:40.001944 | orchestrator | Tuesday 07 April 2026 02:39:29 +0000 (0:00:00.987) 0:02:39.996 *********
2026-04-07 02:39:40.001952 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:40.001959 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:40.001967 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:40.001975 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:40.001988 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:40.002001 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:40.002074 | orchestrator |
2026-04-07 02:39:40.002093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-07 02:39:40.002106 | orchestrator | Tuesday 07 April 2026 02:39:30 +0000 (0:00:00.704) 0:02:40.700 *********
2026-04-07 02:39:40.002117 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:40.002127 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:40.002136 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:40.002146 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:39:40.002156 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:39:40.002165 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:39:40.002174 | orchestrator |
2026-04-07 02:39:40.002184 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-07 02:39:40.002194 | orchestrator | Tuesday 07 April 2026 02:39:31 +0000 (0:00:01.449) 0:02:42.150 *********
2026-04-07 02:39:40.002204 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:39:40.002215 | orchestrator |
2026-04-07 02:39:40.002225 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-07 02:39:40.002234 | orchestrator | Tuesday 07 April 2026 02:39:32 +0000 (0:00:01.353) 0:02:43.503 *********
2026-04-07 02:39:40.002244 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-07 02:39:40.002253 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002263 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-07 02:39:40.002272 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-07 02:39:40.002281 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-07 02:39:40.002290 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002300 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-07 02:39:40.002323 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002335 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002363 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-07 02:39:40.002376 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002386 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002396 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-07 02:39:40.002432 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002442 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002458 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002468 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-07 02:39:40.002487 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002536 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002546 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002554 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002562 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-07 02:39:40.002569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002577 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002593 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002601 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002609 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-07 02:39:40.002617 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002650 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002658 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002666 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002674 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-07 02:39:40.002682 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002690 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002697 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002713 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002721 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-07 02:39:40.002729 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002736 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002752 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002760 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002767 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-07 02:39:40.002775 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002783 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002791 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002807 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002814 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 02:39:40.002830 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002838 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002851 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002859 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002867 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 02:39:40.002875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002883 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002891 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002904 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002912 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.002920 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 02:39:40.002927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002935 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.002943 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.002951 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002959 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.002966 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.002974 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 02:39:40.002982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.002990 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.002998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.003005 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-07 02:39:40.003014 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-07 02:39:40.003021 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.003029 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-07 02:39:40.003037 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 02:39:40.003045 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.003053 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-07 02:39:40.003061 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-07 02:39:40.003069 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-07 02:39:40.003077 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-07 02:39:40.003085 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 02:39:40.003098 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-07 02:39:56.192443 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-07 02:39:56.192615 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-07 02:39:56.192632 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-07 02:39:56.192642 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-07 02:39:56.192651 | orchestrator |
2026-04-07 02:39:56.192662 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-07 02:39:56.192672 | orchestrator | Tuesday 07 April 2026 02:39:39 +0000 (0:00:06.973) 0:02:50.477 *********
2026-04-07 02:39:56.192681 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.192691 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.192700 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.192710 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:39:56.192743 | orchestrator |
2026-04-07 02:39:56.192752 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-07 02:39:56.192761 | orchestrator | Tuesday 07 April 2026 02:39:41 +0000 (0:00:01.207) 0:02:51.684 *********
2026-04-07 02:39:56.192770 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192780 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192803 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192812 | orchestrator |
2026-04-07 02:39:56.192821 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-07 02:39:56.192840 | orchestrator | Tuesday 07 April 2026 02:39:41 +0000 (0:00:00.756) 0:02:52.441 *********
2026-04-07 02:39:56.192849 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192857 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192866 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-07 02:39:56.192875 | orchestrator |
2026-04-07 02:39:56.192884 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-07 02:39:56.192893 | orchestrator | Tuesday 07 April 2026 02:39:43 +0000 (0:00:01.402) 0:02:53.843 *********
2026-04-07 02:39:56.192901 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:56.192922 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:56.192935 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:56.192953 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.192974 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.192988 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193014 | orchestrator |
2026-04-07 02:39:56.193027 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-07 02:39:56.193058 | orchestrator | Tuesday 07 April 2026 02:39:44 +0000 (0:00:00.963) 0:02:54.807 *********
2026-04-07 02:39:56.193074 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:39:56.193089 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:39:56.193104 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:39:56.193114 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193124 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193134 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193144 | orchestrator |
2026-04-07 02:39:56.193160 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-07 02:39:56.193182 | orchestrator | Tuesday 07 April 2026 02:39:44 +0000 (0:00:00.669) 0:02:55.477 *********
2026-04-07 02:39:56.193198 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193211 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193225 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:56.193238 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193250 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193262 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193275 | orchestrator |
2026-04-07 02:39:56.193288 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-07 02:39:56.193301 | orchestrator | Tuesday 07 April 2026 02:39:45 +0000 (0:00:00.976) 0:02:56.453 *********
2026-04-07 02:39:56.193314 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193328 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193341 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:56.193355 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193369 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193397 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193412 | orchestrator |
2026-04-07 02:39:56.193426 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-07 02:39:56.193441 | orchestrator | Tuesday 07 April 2026 02:39:46 +0000 (0:00:00.665) 0:02:57.119 *********
2026-04-07 02:39:56.193454 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193470 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193485 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:56.193498 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193507 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193571 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193580 | orchestrator |
2026-04-07 02:39:56.193589 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-07 02:39:56.193599 | orchestrator | Tuesday 07 April 2026 02:39:47 +0000 (0:00:00.994) 0:02:58.113 *********
2026-04-07 02:39:56.193608 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193617 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193626 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:56.193635 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193673 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193695 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193709 | orchestrator |
2026-04-07 02:39:56.193723 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-07 02:39:56.193738 | orchestrator | Tuesday 07 April 2026 02:39:48 +0000 (0:00:00.653) 0:02:58.766 *********
2026-04-07 02:39:56.193753 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193769 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193783 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:39:56.193797 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:39:56.193813 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:39:56.193827 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:39:56.193842 | orchestrator |
2026-04-07 02:39:56.193859 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-07 02:39:56.193875 | orchestrator | Tuesday 07 April 2026 02:39:49 +0000 (0:00:00.939) 0:02:59.706 *********
2026-04-07 02:39:56.193892 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:39:56.193907 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:39:56.193916 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:39:56.193925 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.193933 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.193942 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.193950 | orchestrator | 2026-04-07 02:39:56.193959 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-07 02:39:56.193967 | orchestrator | Tuesday 07 April 2026 02:39:49 +0000 (0:00:00.671) 0:03:00.377 ********* 2026-04-07 02:39:56.193976 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.193985 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.193993 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.194002 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:39:56.194011 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:39:56.194073 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:39:56.194082 | orchestrator | 2026-04-07 02:39:56.194091 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-07 02:39:56.194103 | orchestrator | Tuesday 07 April 2026 02:39:52 +0000 (0:00:02.783) 0:03:03.161 ********* 2026-04-07 02:39:56.194118 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:39:56.194131 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:39:56.194145 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:39:56.194159 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.194174 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.194188 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.194203 | orchestrator | 2026-04-07 02:39:56.194214 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-07 02:39:56.194236 | orchestrator | Tuesday 07 April 2026 02:39:53 +0000 (0:00:00.629) 0:03:03.790 ********* 2026-04-07 
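The two tasks above ("Run 'ceph-volume lvm list'…" then "Set_fact num_osds (add existing osds)") count OSDs that already exist on each storage node. A minimal sketch of that counting step, assuming the JSON shape of `ceph-volume lvm list --format json` (a mapping of OSD id to its LVM device entries); the helper name and the sample string are illustrative, not taken from ceph-ansible:

```python
import json

def count_existing_osds(ceph_volume_json: str) -> int:
    """Count already-created OSDs from `ceph-volume lvm list --format json`.

    Assumption: the report is a dict keyed by OSD id, each value being a
    list of LVM device entries, so the OSD count is the number of keys.
    """
    report = json.loads(ceph_volume_json)
    return len(report)

# Hypothetical two-OSD report, shaped like the assumed JSON above.
sample_output = '{"0": [{"devices": ["/dev/sdb"]}], "1": [{"devices": ["/dev/sdc"]}]}'
print(count_existing_osds(sample_output))
```

On the mon-only hosts (testbed-node-0..2) both tasks are skipped, which matches the count being needed only where OSDs run.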
02:39:56.194245 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:39:56.194253 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:39:56.194262 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:39:56.194271 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.194279 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.194288 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.194296 | orchestrator | 2026-04-07 02:39:56.194305 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-07 02:39:56.194314 | orchestrator | Tuesday 07 April 2026 02:39:54 +0000 (0:00:01.005) 0:03:04.796 ********* 2026-04-07 02:39:56.194322 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:39:56.194331 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:39:56.194348 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:39:56.194357 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.194365 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.194374 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.194382 | orchestrator | 2026-04-07 02:39:56.194391 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-07 02:39:56.194400 | orchestrator | Tuesday 07 April 2026 02:39:54 +0000 (0:00:00.677) 0:03:05.473 ********* 2026-04-07 02:39:56.194409 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-07 02:39:56.194419 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-07 02:39:56.194427 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-07 02:39:56.194436 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:39:56.194444 | 
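The "Set_fact _osd_memory_target" step that runs `ok` on the OSD nodes above typically divides a safety-factored share of host memory across that host's OSDs. A hedged sketch of that idea; the 0.7 default factor and the exact formula are assumptions for illustration, not values read from this log:

```python
def osd_memory_target(total_mem_bytes: int, num_osds: int,
                      safety_factor: float = 0.7) -> int:
    """Illustrative per-OSD memory target: a fraction of host memory
    split evenly across the OSDs on that host. Guard against a zero
    divisor so a host with no counted OSDs still yields a sane value."""
    if num_osds < 1:
        num_osds = 1
    return int(total_mem_bytes * safety_factor / num_osds)
```

The subsequent "Set osd_memory_target to cluster host config" task is skipped here, so in this run the computed value stays a local fact rather than being pushed into the cluster config.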
orchestrator | skipping: [testbed-node-1] 2026-04-07 02:39:56.194453 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:39:56.194462 | orchestrator | 2026-04-07 02:39:56.194470 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-07 02:39:56.194479 | orchestrator | Tuesday 07 April 2026 02:39:55 +0000 (0:00:00.991) 0:03:06.464 ********* 2026-04-07 02:39:56.194490 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-07 02:39:56.194503 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-07 02:39:56.194565 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:39:56.194587 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-07 02:40:15.673665 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-07 02:40:15.673760 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.673773 | orchestrator | skipping: 
[testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-07 02:40:15.673798 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-07 02:40:15.673805 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.673811 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.673817 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.673823 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.673830 | orchestrator | 2026-04-07 02:40:15.673837 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-07 02:40:15.673845 | orchestrator | Tuesday 07 April 2026 02:39:56 +0000 (0:00:00.723) 0:03:07.188 ********* 2026-04-07 02:40:15.673851 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.673857 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.673863 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.673869 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.673875 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.673881 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.673887 | orchestrator | 2026-04-07 02:40:15.673893 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-07 02:40:15.673900 | orchestrator | Tuesday 07 April 2026 02:39:57 +0000 (0:00:01.011) 0:03:08.199 ********* 2026-04-07 02:40:15.673906 | orchestrator | skipping: [testbed-node-3] 
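The skipped "Set config to cluster" items above show the per-instance RGW config shape: a `client.rgw.<cluster>.<host>.<instance>` section carrying a `log_file` path and a beast `rgw_frontends` endpoint. A sketch that reproduces exactly those values from the loop items; the helper itself is hypothetical, not ceph-ansible code:

```python
def render_rgw_config(cluster: str, hostname: str, instance: dict) -> dict:
    """Build the section key and values seen in the log items:
    client.rgw.<cluster>.<host>.<instance_name> -> {log_file, rgw_frontends}."""
    name = instance["instance_name"]
    section = f"client.rgw.{cluster}.{hostname}.{name}"
    return {
        section: {
            "log_file": f"/var/log/ceph/ceph-rgw-{cluster}-{hostname}.{name}.log",
            "rgw_frontends": (
                f"beast endpoint={instance['radosgw_address']}:"
                f"{instance['radosgw_frontend_port']}"
            ),
        }
    }

cfg = render_rgw_config(
    "default", "testbed-node-3",
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.13",
     "radosgw_frontend_port": 8081},
)
```

Since the "Set config to cluster" and "Set rgw configs to file" tasks are both skipped in this run, these rendered values end up only in the generated ceph.conf ("Generate Ceph file" below).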
2026-04-07 02:40:15.673912 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.673918 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.673924 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.673930 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.673936 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.673942 | orchestrator | 2026-04-07 02:40:15.673949 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 02:40:15.673968 | orchestrator | Tuesday 07 April 2026 02:39:58 +0000 (0:00:00.666) 0:03:08.866 ********* 2026-04-07 02:40:15.673974 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.673980 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.673986 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.673992 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.673998 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674004 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674010 | orchestrator | 2026-04-07 02:40:15.674060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 02:40:15.674067 | orchestrator | Tuesday 07 April 2026 02:39:59 +0000 (0:00:01.014) 0:03:09.881 ********* 2026-04-07 02:40:15.674073 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674079 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.674086 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.674092 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.674098 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674104 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674110 | orchestrator | 2026-04-07 02:40:15.674116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] 
**** 2026-04-07 02:40:15.674122 | orchestrator | Tuesday 07 April 2026 02:40:00 +0000 (0:00:00.946) 0:03:10.828 ********* 2026-04-07 02:40:15.674128 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674134 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.674140 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.674146 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.674153 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674159 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674171 | orchestrator | 2026-04-07 02:40:15.674177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 02:40:15.674183 | orchestrator | Tuesday 07 April 2026 02:40:00 +0000 (0:00:00.711) 0:03:11.539 ********* 2026-04-07 02:40:15.674190 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:40:15.674197 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:40:15.674203 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:40:15.674209 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.674217 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674224 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674231 | orchestrator | 2026-04-07 02:40:15.674238 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-07 02:40:15.674245 | orchestrator | Tuesday 07 April 2026 02:40:01 +0000 (0:00:00.949) 0:03:12.488 ********* 2026-04-07 02:40:15.674252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:15.674259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:40:15.674267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:15.674274 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674281 | orchestrator | 2026-04-07 02:40:15.674289 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 02:40:15.674296 | orchestrator | Tuesday 07 April 2026 02:40:02 +0000 (0:00:00.462) 0:03:12.951 ********* 2026-04-07 02:40:15.674316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:15.674324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:40:15.674331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:15.674338 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674345 | orchestrator | 2026-04-07 02:40:15.674352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 02:40:15.674360 | orchestrator | Tuesday 07 April 2026 02:40:02 +0000 (0:00:00.487) 0:03:13.438 ********* 2026-04-07 02:40:15.674366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:15.674373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:40:15.674380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:15.674387 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674394 | orchestrator | 2026-04-07 02:40:15.674401 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 02:40:15.674409 | orchestrator | Tuesday 07 April 2026 02:40:03 +0000 (0:00:00.465) 0:03:13.904 ********* 2026-04-07 02:40:15.674420 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:40:15.674431 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:40:15.674443 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:40:15.674454 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.674465 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674476 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674486 | orchestrator | 2026-04-07 02:40:15.674496 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-04-07 02:40:15.674509 | orchestrator | Tuesday 07 April 2026 02:40:04 +0000 (0:00:00.685) 0:03:14.590 ********* 2026-04-07 02:40:15.674565 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 02:40:15.674576 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-07 02:40:15.674585 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-07 02:40:15.674595 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-07 02:40:15.674606 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.674616 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-07 02:40:15.674626 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:15.674636 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-07 02:40:15.674646 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:15.674657 | orchestrator | 2026-04-07 02:40:15.674666 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-07 02:40:15.674686 | orchestrator | Tuesday 07 April 2026 02:40:05 +0000 (0:00:01.939) 0:03:16.529 ********* 2026-04-07 02:40:15.674697 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:40:15.674707 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:40:15.674717 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:40:15.674727 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:40:15.674737 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:40:15.674747 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:40:15.674756 | orchestrator | 2026-04-07 02:40:15.674767 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 02:40:15.674777 | orchestrator | Tuesday 07 April 2026 02:40:09 +0000 (0:00:03.064) 0:03:19.594 ********* 2026-04-07 02:40:15.674787 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:40:15.674804 | orchestrator | changed: [testbed-node-4] 
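The "Set_fact rgw_instances" loop above (item=0 per host) produces the instance dicts that appeared earlier in the run, e.g. `{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}`. A sketch of that loop, assuming ports are offset from a base so instance 0 lands on 8081; the function and the base-port default are illustrative:

```python
def build_rgw_instances(address: str, base_port: int = 8080,
                        count: int = 1) -> list:
    """One dict per RGW instance on a host, mirroring the loop items
    in the log: rgw0 on base_port+1, rgw1 on base_port+2, and so on."""
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": address,
            "radosgw_frontend_port": base_port + i + 1,
        }
        for i in range(count)
    ]

instances = build_rgw_instances("192.168.16.13")
```

With one instance per host, as here, the loop runs once (item=0) on each RGW node and is skipped on the control nodes.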
2026-04-07 02:40:15.674814 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:40:15.674825 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:40:15.674835 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:40:15.674846 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:40:15.674857 | orchestrator | 2026-04-07 02:40:15.674867 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-07 02:40:15.674878 | orchestrator | Tuesday 07 April 2026 02:40:10 +0000 (0:00:01.071) 0:03:20.665 ********* 2026-04-07 02:40:15.674888 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:15.674899 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:15.674909 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:15.674920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:40:15.674931 | orchestrator | 2026-04-07 02:40:15.674941 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-07 02:40:15.674951 | orchestrator | Tuesday 07 April 2026 02:40:11 +0000 (0:00:01.256) 0:03:21.921 ********* 2026-04-07 02:40:15.674961 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:40:15.674971 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:40:15.674982 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:40:15.674993 | orchestrator | 2026-04-07 02:40:15.675003 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-07 02:40:15.675015 | orchestrator | Tuesday 07 April 2026 02:40:11 +0000 (0:00:00.381) 0:03:22.303 ********* 2026-04-07 02:40:15.675025 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:40:15.675035 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:40:15.675045 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:40:15.675055 | orchestrator | 2026-04-07 02:40:15.675065 | orchestrator | 
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-07 02:40:15.675075 | orchestrator | Tuesday 07 April 2026 02:40:13 +0000 (0:00:01.558) 0:03:23.862 ********* 2026-04-07 02:40:15.675085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 02:40:15.675094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 02:40:15.675104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 02:40:15.675114 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.675124 | orchestrator | 2026-04-07 02:40:15.675134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-07 02:40:15.675143 | orchestrator | Tuesday 07 April 2026 02:40:14 +0000 (0:00:00.715) 0:03:24.577 ********* 2026-04-07 02:40:15.675151 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:40:15.675161 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:40:15.675170 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:40:15.675179 | orchestrator | 2026-04-07 02:40:15.675189 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-07 02:40:15.675199 | orchestrator | Tuesday 07 April 2026 02:40:14 +0000 (0:00:00.401) 0:03:24.979 ********* 2026-04-07 02:40:15.675208 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:15.675231 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:34.081276 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:34.081413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:40:34.081433 | orchestrator | 2026-04-07 02:40:34.081446 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-07 02:40:34.081463 | orchestrator | Tuesday 07 April 2026 02:40:15 +0000 (0:00:01.226) 0:03:26.206 ********* 2026-04-07 
02:40:34.081476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:34.081490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:40:34.081504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:34.081517 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081580 | orchestrator | 2026-04-07 02:40:34.081588 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-07 02:40:34.081596 | orchestrator | Tuesday 07 April 2026 02:40:16 +0000 (0:00:00.421) 0:03:26.627 ********* 2026-04-07 02:40:34.081603 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081610 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:34.081618 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:34.081625 | orchestrator | 2026-04-07 02:40:34.081632 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-07 02:40:34.081640 | orchestrator | Tuesday 07 April 2026 02:40:16 +0000 (0:00:00.433) 0:03:27.060 ********* 2026-04-07 02:40:34.081647 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081654 | orchestrator | 2026-04-07 02:40:34.081662 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-07 02:40:34.081669 | orchestrator | Tuesday 07 April 2026 02:40:16 +0000 (0:00:00.258) 0:03:27.319 ********* 2026-04-07 02:40:34.081676 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081683 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:34.081691 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:34.081706 | orchestrator | 2026-04-07 02:40:34.081714 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-07 02:40:34.081721 | orchestrator | Tuesday 07 April 2026 02:40:17 +0000 (0:00:00.346) 0:03:27.665 ********* 
2026-04-07 02:40:34.081728 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081735 | orchestrator | 2026-04-07 02:40:34.081743 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-07 02:40:34.081750 | orchestrator | Tuesday 07 April 2026 02:40:17 +0000 (0:00:00.831) 0:03:28.497 ********* 2026-04-07 02:40:34.081757 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081764 | orchestrator | 2026-04-07 02:40:34.081772 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-07 02:40:34.081779 | orchestrator | Tuesday 07 April 2026 02:40:18 +0000 (0:00:00.274) 0:03:28.772 ********* 2026-04-07 02:40:34.081786 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081793 | orchestrator | 2026-04-07 02:40:34.081801 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-07 02:40:34.081808 | orchestrator | Tuesday 07 April 2026 02:40:18 +0000 (0:00:00.155) 0:03:28.927 ********* 2026-04-07 02:40:34.081830 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081843 | orchestrator | 2026-04-07 02:40:34.081856 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-07 02:40:34.081868 | orchestrator | Tuesday 07 April 2026 02:40:18 +0000 (0:00:00.304) 0:03:29.231 ********* 2026-04-07 02:40:34.081881 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081893 | orchestrator | 2026-04-07 02:40:34.081905 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-07 02:40:34.081919 | orchestrator | Tuesday 07 April 2026 02:40:18 +0000 (0:00:00.287) 0:03:29.519 ********* 2026-04-07 02:40:34.081932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:34.081945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 
02:40:34.081958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:34.081982 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.081995 | orchestrator | 2026-04-07 02:40:34.082008 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-07 02:40:34.082084 | orchestrator | Tuesday 07 April 2026 02:40:19 +0000 (0:00:00.427) 0:03:29.946 ********* 2026-04-07 02:40:34.082098 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.082111 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:40:34.082124 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:40:34.082136 | orchestrator | 2026-04-07 02:40:34.082149 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-07 02:40:34.082161 | orchestrator | Tuesday 07 April 2026 02:40:19 +0000 (0:00:00.332) 0:03:30.278 ********* 2026-04-07 02:40:34.082173 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.082185 | orchestrator | 2026-04-07 02:40:34.082207 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-07 02:40:34.082220 | orchestrator | Tuesday 07 April 2026 02:40:19 +0000 (0:00:00.244) 0:03:30.522 ********* 2026-04-07 02:40:34.082232 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.082245 | orchestrator | 2026-04-07 02:40:34.082258 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-07 02:40:34.082270 | orchestrator | Tuesday 07 April 2026 02:40:20 +0000 (0:00:00.234) 0:03:30.757 ********* 2026-04-07 02:40:34.082282 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:40:34.082294 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:40:34.082307 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:40:34.082319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-07 02:40:34.082332 | orchestrator | 2026-04-07 02:40:34.082344 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-07 02:40:34.082356 | orchestrator | Tuesday 07 April 2026 02:40:21 +0000 (0:00:01.290) 0:03:32.047 ********* 2026-04-07 02:40:34.082369 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:40:34.082384 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:40:34.082396 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:40:34.082408 | orchestrator | 2026-04-07 02:40:34.082439 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-07 02:40:34.082453 | orchestrator | Tuesday 07 April 2026 02:40:21 +0000 (0:00:00.375) 0:03:32.422 ********* 2026-04-07 02:40:34.082466 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:40:34.082479 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:40:34.082491 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:40:34.082502 | orchestrator | 2026-04-07 02:40:34.082515 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-07 02:40:34.082560 | orchestrator | Tuesday 07 April 2026 02:40:23 +0000 (0:00:01.588) 0:03:34.011 ********* 2026-04-07 02:40:34.082573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:40:34.082584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:40:34.082596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:40:34.082607 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:40:34.082619 | orchestrator | 2026-04-07 02:40:34.082630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-07 02:40:34.082642 | orchestrator | Tuesday 07 April 2026 02:40:24 +0000 (0:00:00.703) 0:03:34.715 ********* 2026-04-07 02:40:34.082653 | orchestrator | ok: 
[testbed-node-3]
2026-04-07 02:40:34.082665 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:40:34.082677 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:40:34.082690 | orchestrator |
2026-04-07 02:40:34.082701 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-07 02:40:34.082712 | orchestrator | Tuesday 07 April 2026 02:40:24 +0000 (0:00:00.361) 0:03:35.077 *********
2026-04-07 02:40:34.082724 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:34.082734 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:34.082747 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:34.082770 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:40:34.082783 | orchestrator |
2026-04-07 02:40:34.082796 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-07 02:40:34.082808 | orchestrator | Tuesday 07 April 2026 02:40:25 +0000 (0:00:01.185) 0:03:36.263 *********
2026-04-07 02:40:34.082822 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:40:34.082834 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:40:34.082846 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:40:34.082858 | orchestrator |
2026-04-07 02:40:34.082871 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-07 02:40:34.082884 | orchestrator | Tuesday 07 April 2026 02:40:26 +0000 (0:00:00.390) 0:03:36.653 *********
2026-04-07 02:40:34.082896 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:40:34.082908 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:40:34.082920 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:40:34.082932 | orchestrator |
2026-04-07 02:40:34.082944 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-07 02:40:34.082957 | orchestrator | Tuesday 07 April 2026 02:40:27 +0000 (0:00:01.289) 0:03:37.942 *********
2026-04-07 02:40:34.082970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 02:40:34.082990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 02:40:34.083002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 02:40:34.083014 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:40:34.083026 | orchestrator |
2026-04-07 02:40:34.083039 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-07 02:40:34.083051 | orchestrator | Tuesday 07 April 2026 02:40:28 +0000 (0:00:01.020) 0:03:38.963 *********
2026-04-07 02:40:34.083063 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:40:34.083076 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:40:34.083088 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:40:34.083100 | orchestrator |
2026-04-07 02:40:34.083112 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-07 02:40:34.083124 | orchestrator | Tuesday 07 April 2026 02:40:29 +0000 (0:00:00.653) 0:03:39.616 *********
2026-04-07 02:40:34.083136 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:40:34.083148 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:40:34.083160 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:40:34.083173 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:34.083185 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:34.083197 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:34.083209 | orchestrator |
2026-04-07 02:40:34.083220 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-07 02:40:34.083233 | orchestrator | Tuesday 07 April 2026 02:40:29 +0000 (0:00:00.756) 0:03:40.373 *********
2026-04-07 02:40:34.083245 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:40:34.083257 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:40:34.083270 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:40:34.083282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:40:34.083294 | orchestrator |
2026-04-07 02:40:34.083306 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-07 02:40:34.083318 | orchestrator | Tuesday 07 April 2026 02:40:31 +0000 (0:00:01.237) 0:03:41.610 *********
2026-04-07 02:40:34.083329 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:34.083341 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:34.083354 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:34.083367 | orchestrator |
2026-04-07 02:40:34.083379 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-07 02:40:34.083391 | orchestrator | Tuesday 07 April 2026 02:40:31 +0000 (0:00:00.430) 0:03:42.040 *********
2026-04-07 02:40:34.083402 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:40:34.083422 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:40:34.083435 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:40:34.083447 | orchestrator |
2026-04-07 02:40:34.083459 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-07 02:40:34.083472 | orchestrator | Tuesday 07 April 2026 02:40:32 +0000 (0:00:01.298) 0:03:43.339 *********
2026-04-07 02:40:34.083484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 02:40:34.083496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 02:40:34.083518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 02:40:52.637257 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.637364 | orchestrator |
2026-04-07 02:40:52.637379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-07 02:40:52.637390 | orchestrator | Tuesday 07 April 2026 02:40:34 +0000 (0:00:01.273) 0:03:44.612 *********
2026-04-07 02:40:52.637399 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.637409 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.637418 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.637427 | orchestrator |
2026-04-07 02:40:52.637436 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-07 02:40:52.637445 | orchestrator |
2026-04-07 02:40:52.637454 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 02:40:52.637462 | orchestrator | Tuesday 07 April 2026 02:40:34 +0000 (0:00:00.648) 0:03:45.260 *********
2026-04-07 02:40:52.637472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:40:52.637483 | orchestrator |
2026-04-07 02:40:52.637491 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 02:40:52.637500 | orchestrator | Tuesday 07 April 2026 02:40:35 +0000 (0:00:00.852) 0:03:46.113 *********
2026-04-07 02:40:52.637509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:40:52.637518 | orchestrator |
2026-04-07 02:40:52.637580 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 02:40:52.637592 | orchestrator | Tuesday 07 April 2026 02:40:36 +0000 (0:00:00.585) 0:03:46.698 *********
2026-04-07 02:40:52.637601 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.637610 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.637618 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.637627 | orchestrator |
2026-04-07 02:40:52.637636 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 02:40:52.637644 | orchestrator | Tuesday 07 April 2026 02:40:36 +0000 (0:00:00.736) 0:03:47.434 *********
2026-04-07 02:40:52.637653 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.637662 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.637670 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.637679 | orchestrator |
2026-04-07 02:40:52.637688 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 02:40:52.637697 | orchestrator | Tuesday 07 April 2026 02:40:37 +0000 (0:00:00.657) 0:03:48.092 *********
2026-04-07 02:40:52.637705 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.637714 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.637723 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.637731 | orchestrator |
2026-04-07 02:40:52.637740 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 02:40:52.637749 | orchestrator | Tuesday 07 April 2026 02:40:37 +0000 (0:00:00.413) 0:03:48.505 *********
2026-04-07 02:40:52.637757 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.637766 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.637790 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.637799 | orchestrator |
2026-04-07 02:40:52.637808 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 02:40:52.637816 | orchestrator | Tuesday 07 April 2026 02:40:38 +0000 (0:00:00.396) 0:03:48.901 *********
2026-04-07 02:40:52.637846 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.637855 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.637864 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.637873 | orchestrator |
2026-04-07 02:40:52.637888 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 02:40:52.637903 | orchestrator | Tuesday 07 April 2026 02:40:39 +0000 (0:00:00.791) 0:03:49.693 *********
2026-04-07 02:40:52.637917 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.637930 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.637944 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.637957 | orchestrator |
2026-04-07 02:40:52.637971 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 02:40:52.637986 | orchestrator | Tuesday 07 April 2026 02:40:39 +0000 (0:00:00.664) 0:03:50.357 *********
2026-04-07 02:40:52.638001 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638072 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638083 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638092 | orchestrator |
2026-04-07 02:40:52.638100 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 02:40:52.638109 | orchestrator | Tuesday 07 April 2026 02:40:40 +0000 (0:00:00.386) 0:03:50.743 *********
2026-04-07 02:40:52.638118 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638126 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638143 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638151 | orchestrator |
2026-04-07 02:40:52.638160 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 02:40:52.638169 | orchestrator | Tuesday 07 April 2026 02:40:40 +0000 (0:00:00.769) 0:03:51.513 *********
2026-04-07 02:40:52.638177 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638186 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638194 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638203 | orchestrator |
2026-04-07 02:40:52.638211 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-07 02:40:52.638220 | orchestrator | Tuesday 07 April 2026 02:40:41 +0000 (0:00:00.806) 0:03:52.319 *********
2026-04-07 02:40:52.638229 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638238 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638246 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638255 | orchestrator |
2026-04-07 02:40:52.638263 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-07 02:40:52.638272 | orchestrator | Tuesday 07 April 2026 02:40:42 +0000 (0:00:00.656) 0:03:52.976 *********
2026-04-07 02:40:52.638281 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638290 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638298 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638307 | orchestrator |
2026-04-07 02:40:52.638316 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-07 02:40:52.638325 | orchestrator | Tuesday 07 April 2026 02:40:42 +0000 (0:00:00.385) 0:03:53.361 *********
2026-04-07 02:40:52.638350 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638359 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638368 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638377 | orchestrator |
2026-04-07 02:40:52.638386 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 02:40:52.638394 | orchestrator | Tuesday 07 April 2026 02:40:43 +0000 (0:00:00.378) 0:03:53.739 *********
2026-04-07 02:40:52.638403 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638412 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638420 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638429 | orchestrator |
2026-04-07 02:40:52.638438 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 02:40:52.638447 | orchestrator | Tuesday 07 April 2026 02:40:43 +0000 (0:00:00.336) 0:03:54.075 *********
2026-04-07 02:40:52.638455 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638474 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638483 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638491 | orchestrator |
2026-04-07 02:40:52.638500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 02:40:52.638509 | orchestrator | Tuesday 07 April 2026 02:40:44 +0000 (0:00:00.648) 0:03:54.723 *********
2026-04-07 02:40:52.638517 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638550 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638563 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638572 | orchestrator |
2026-04-07 02:40:52.638581 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 02:40:52.638590 | orchestrator | Tuesday 07 April 2026 02:40:44 +0000 (0:00:00.341) 0:03:55.065 *********
2026-04-07 02:40:52.638598 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638607 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:40:52.638615 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:40:52.638624 | orchestrator |
2026-04-07 02:40:52.638632 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 02:40:52.638641 | orchestrator | Tuesday 07 April 2026 02:40:44 +0000 (0:00:00.345) 0:03:55.411 *********
2026-04-07 02:40:52.638650 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638658 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638667 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638675 | orchestrator |
2026-04-07 02:40:52.638684 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 02:40:52.638692 | orchestrator | Tuesday 07 April 2026 02:40:45 +0000 (0:00:00.361) 0:03:55.772 *********
2026-04-07 02:40:52.638701 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638709 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638718 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638726 | orchestrator |
2026-04-07 02:40:52.638735 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 02:40:52.638744 | orchestrator | Tuesday 07 April 2026 02:40:45 +0000 (0:00:00.727) 0:03:56.500 *********
2026-04-07 02:40:52.638752 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638761 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638769 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638778 | orchestrator |
2026-04-07 02:40:52.638792 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-07 02:40:52.638801 | orchestrator | Tuesday 07 April 2026 02:40:46 +0000 (0:00:00.609) 0:03:57.110 *********
2026-04-07 02:40:52.638810 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638819 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638827 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638836 | orchestrator |
2026-04-07 02:40:52.638845 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-07 02:40:52.638853 | orchestrator | Tuesday 07 April 2026 02:40:46 +0000 (0:00:00.362) 0:03:57.473 *********
2026-04-07 02:40:52.638863 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:40:52.638872 | orchestrator |
2026-04-07 02:40:52.638880 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-07 02:40:52.638889 | orchestrator | Tuesday 07 April 2026 02:40:47 +0000 (0:00:01.028) 0:03:58.501 *********
2026-04-07 02:40:52.638897 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:40:52.638906 | orchestrator |
2026-04-07 02:40:52.638915 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-07 02:40:52.638923 | orchestrator | Tuesday 07 April 2026 02:40:48 +0000 (0:00:00.193) 0:03:58.694 *********
2026-04-07 02:40:52.638932 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 02:40:52.638940 | orchestrator |
2026-04-07 02:40:52.638949 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-07 02:40:52.638957 | orchestrator | Tuesday 07 April 2026 02:40:49 +0000 (0:00:01.123) 0:03:59.818 *********
2026-04-07 02:40:52.638972 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.638981 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.638990 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.638998 | orchestrator |
2026-04-07 02:40:52.639007 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-07 02:40:52.639015 | orchestrator | Tuesday 07 April 2026 02:40:49 +0000 (0:00:00.391) 0:04:00.209 *********
2026-04-07 02:40:52.639024 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:40:52.639033 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:40:52.639041 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:40:52.639050 | orchestrator |
2026-04-07 02:40:52.639058 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-07 02:40:52.639067 | orchestrator | Tuesday 07 April 2026 02:40:50 +0000 (0:00:00.694) 0:04:00.904 *********
2026-04-07 02:40:52.639075 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:40:52.639084 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:40:52.639093 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:40:52.639102 | orchestrator |
2026-04-07 02:40:52.639110 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-07 02:40:52.639119 | orchestrator | Tuesday 07 April 2026 02:40:51 +0000 (0:00:01.362) 0:04:02.266 *********
2026-04-07 02:40:52.639128 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:40:52.639137 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:40:52.639145 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:40:52.639154 | orchestrator |
2026-04-07 02:40:52.639168 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-07 02:42:04.246680 | orchestrator | Tuesday 07 April 2026 02:40:52 +0000 (0:00:00.898) 0:04:03.165 *********
2026-04-07 02:42:04.246802 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.246823 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.246835 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.246848 | orchestrator |
2026-04-07 02:42:04.246861 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-07 02:42:04.246874 | orchestrator | Tuesday 07 April 2026 02:40:53 +0000 (0:00:00.702) 0:04:03.867 *********
2026-04-07 02:42:04.246886 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.246899 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:04.246910 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:04.246922 | orchestrator |
2026-04-07 02:42:04.246935 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-07 02:42:04.246947 | orchestrator | Tuesday 07 April 2026 02:40:54 +0000 (0:00:01.056) 0:04:04.924 *********
2026-04-07 02:42:04.246960 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.246972 | orchestrator |
2026-04-07 02:42:04.246984 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-07 02:42:04.246997 | orchestrator | Tuesday 07 April 2026 02:40:55 +0000 (0:00:01.526) 0:04:06.451 *********
2026-04-07 02:42:04.247009 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.247021 | orchestrator |
2026-04-07 02:42:04.247042 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-07 02:42:04.247053 | orchestrator | Tuesday 07 April 2026 02:40:56 +0000 (0:00:00.787) 0:04:07.238 *********
2026-04-07 02:42:04.247065 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 02:42:04.247077 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 02:42:04.247089 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 02:42:04.247101 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-07 02:42:04.247114 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-07 02:42:04.247126 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-07 02:42:04.247138 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-07 02:42:04.247151 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-07 02:42:04.247164 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-07 02:42:04.247203 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-07 02:42:04.247218 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-07 02:42:04.247231 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-07 02:42:04.247244 | orchestrator |
2026-04-07 02:42:04.247263 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-07 02:42:04.247277 | orchestrator | Tuesday 07 April 2026 02:40:59 +0000 (0:00:03.254) 0:04:10.492 *********
2026-04-07 02:42:04.247289 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.247301 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.247329 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.247343 | orchestrator |
2026-04-07 02:42:04.247356 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-07 02:42:04.247368 | orchestrator | Tuesday 07 April 2026 02:41:01 +0000 (0:00:01.246) 0:04:11.739 *********
2026-04-07 02:42:04.247382 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.247395 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:04.247409 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:04.247417 | orchestrator |
2026-04-07 02:42:04.247427 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-07 02:42:04.247436 | orchestrator | Tuesday 07 April 2026 02:41:01 +0000 (0:00:00.681) 0:04:12.420 *********
2026-04-07 02:42:04.247445 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.247454 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:04.247463 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:04.247471 | orchestrator |
2026-04-07 02:42:04.247481 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-07 02:42:04.247489 | orchestrator | Tuesday 07 April 2026 02:41:02 +0000 (0:00:00.402) 0:04:12.823 *********
2026-04-07 02:42:04.247497 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.247504 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.247512 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.247519 | orchestrator |
2026-04-07 02:42:04.247526 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-07 02:42:04.247533 | orchestrator | Tuesday 07 April 2026 02:41:03 +0000 (0:00:01.460) 0:04:14.284 *********
2026-04-07 02:42:04.247540 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.247576 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.247589 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.247608 | orchestrator |
2026-04-07 02:42:04.247623 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-07 02:42:04.247635 | orchestrator | Tuesday 07 April 2026 02:41:05 +0000 (0:00:01.321) 0:04:15.606 *********
2026-04-07 02:42:04.247648 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:04.247661 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:04.247672 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:04.247684 | orchestrator |
2026-04-07 02:42:04.247695 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-07 02:42:04.247708 | orchestrator | Tuesday 07 April 2026 02:41:05 +0000 (0:00:00.667) 0:04:16.274 *********
2026-04-07 02:42:04.247720 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:42:04.247732 | orchestrator |
2026-04-07 02:42:04.247746 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-07 02:42:04.247760 | orchestrator | Tuesday 07 April 2026 02:41:06 +0000 (0:00:00.619) 0:04:16.893 *********
2026-04-07 02:42:04.247773 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:04.247787 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:04.247801 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:04.247809 | orchestrator |
2026-04-07 02:42:04.247816 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-07 02:42:04.247845 | orchestrator | Tuesday 07 April 2026 02:41:06 +0000 (0:00:00.344) 0:04:17.238 *********
2026-04-07 02:42:04.247852 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:04.247874 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:04.247882 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:04.247889 | orchestrator |
2026-04-07 02:42:04.247897 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-07 02:42:04.247904 | orchestrator | Tuesday 07 April 2026 02:41:07 +0000 (0:00:00.671) 0:04:17.909 *********
2026-04-07 02:42:04.247912 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:42:04.247920 | orchestrator |
2026-04-07 02:42:04.247928 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-07 02:42:04.247935 | orchestrator | Tuesday 07 April 2026 02:41:07 +0000 (0:00:00.597) 0:04:18.506 *********
2026-04-07 02:42:04.247942 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.247950 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.247957 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.247964 | orchestrator |
2026-04-07 02:42:04.247972 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-07 02:42:04.247979 | orchestrator | Tuesday 07 April 2026 02:41:10 +0000 (0:00:02.054) 0:04:20.561 *********
2026-04-07 02:42:04.247987 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.247994 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.248001 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.248008 | orchestrator |
2026-04-07 02:42:04.248016 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-07 02:42:04.248023 | orchestrator | Tuesday 07 April 2026 02:41:11 +0000 (0:00:01.519) 0:04:22.080 *********
2026-04-07 02:42:04.248030 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.248037 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.248045 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.248052 | orchestrator |
2026-04-07 02:42:04.248059 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-07 02:42:04.248067 | orchestrator | Tuesday 07 April 2026 02:41:13 +0000 (0:00:01.775) 0:04:23.856 *********
2026-04-07 02:42:04.248074 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:42:04.248081 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:42:04.248089 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:42:04.248096 | orchestrator |
2026-04-07 02:42:04.248103 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-07 02:42:04.248110 | orchestrator | Tuesday 07 April 2026 02:41:15 +0000 (0:00:02.075) 0:04:25.931 *********
2026-04-07 02:42:04.248118 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:42:04.248129 | orchestrator |
2026-04-07 02:42:04.248149 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-07 02:42:04.248166 | orchestrator | Tuesday 07 April 2026 02:41:16 +0000 (0:00:00.952) 0:04:26.883 *********
2026-04-07 02:42:04.248185 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-07 02:42:04.248197 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.248208 | orchestrator |
2026-04-07 02:42:04.248219 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-07 02:42:04.248231 | orchestrator | Tuesday 07 April 2026 02:41:38 +0000 (0:00:22.070) 0:04:48.954 *********
2026-04-07 02:42:04.248243 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:04.248256 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:04.248267 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:04.248280 | orchestrator |
2026-04-07 02:42:04.248293 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-07 02:42:04.248306 | orchestrator | Tuesday 07 April 2026 02:41:47 +0000 (0:00:09.591) 0:04:58.545 *********
2026-04-07 02:42:04.248315 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:04.248326 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:04.248344 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:04.248369 | orchestrator |
2026-04-07 02:42:04.248381 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-07 02:42:04.248392 | orchestrator | Tuesday 07 April 2026 02:41:48 +0000 (0:00:00.390) 0:04:58.936 *********
2026-04-07 02:42:04.248407 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-07 02:42:04.248422 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-07 02:42:04.248436 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-07 02:42:04.248460 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-07 02:42:19.418809 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-07 02:42:19.418918 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b4997092f5f4f9dc53ee056fabb68d6c3116fe58'}])
2026-04-07 02:42:19.418934 | orchestrator |
2026-04-07 02:42:19.418945 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-07 02:42:19.418957 | orchestrator | Tuesday 07 April 2026 02:42:04 +0000 (0:00:15.839) 0:05:14.775 *********
2026-04-07 02:42:19.418967 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.418978 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.418988 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.418998 | orchestrator |
2026-04-07 02:42:19.419008 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-07 02:42:19.419018 | orchestrator | Tuesday 07 April 2026 02:42:04 +0000 (0:00:00.411) 0:05:15.187 *********
2026-04-07 02:42:19.419029 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:42:19.419039 | orchestrator |
2026-04-07 02:42:19.419049 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-07 02:42:19.419059 | orchestrator | Tuesday 07 April 2026 02:42:05 +0000 (0:00:00.948) 0:05:16.135 *********
2026-04-07 02:42:19.419069 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:19.419080 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:19.419091 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:19.419101 | orchestrator |
2026-04-07 02:42:19.419134 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-07 02:42:19.419144 | orchestrator | Tuesday 07 April 2026 02:42:06 +0000 (0:00:00.419) 0:05:16.555 *********
2026-04-07 02:42:19.419168 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.419178 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.419188 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.419198 | orchestrator |
2026-04-07 02:42:19.419208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-07 02:42:19.419218 | orchestrator | Tuesday 07 April 2026 02:42:06 +0000 (0:00:00.376) 0:05:16.932 *********
2026-04-07 02:42:19.419227 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 02:42:19.419238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 02:42:19.419247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 02:42:19.419257 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.419266 | orchestrator |
2026-04-07 02:42:19.419276 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-07 02:42:19.419286 | orchestrator | Tuesday 07 April 2026 02:42:07 +0000 (0:00:01.030) 0:05:17.962 *********
2026-04-07 02:42:19.419296 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:19.419306 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:19.419315 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:19.419325 | orchestrator |
2026-04-07 02:42:19.419336 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-07 02:42:19.419348 | orchestrator |
2026-04-07 02:42:19.419359 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 02:42:19.419371 | orchestrator | Tuesday 07 April 2026 02:42:08 +0000 (0:00:01.021) 0:05:18.984 *********
2026-04-07 02:42:19.419383 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:42:19.419397 | orchestrator |
2026-04-07 02:42:19.419409 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 02:42:19.419420 | orchestrator | Tuesday 07 April 2026 02:42:09 +0000 (0:00:00.561) 0:05:19.546 *********
2026-04-07 02:42:19.419431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-04-07 02:42:19.419442 | orchestrator | 2026-04-07 02:42:19.419453 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 02:42:19.419465 | orchestrator | Tuesday 07 April 2026 02:42:09 +0000 (0:00:00.871) 0:05:20.417 ********* 2026-04-07 02:42:19.419477 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:42:19.419487 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:42:19.419499 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:42:19.419510 | orchestrator | 2026-04-07 02:42:19.419521 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 02:42:19.419532 | orchestrator | Tuesday 07 April 2026 02:42:10 +0000 (0:00:00.794) 0:05:21.212 ********* 2026-04-07 02:42:19.419544 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.419587 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.419602 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.419614 | orchestrator | 2026-04-07 02:42:19.419626 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 02:42:19.419637 | orchestrator | Tuesday 07 April 2026 02:42:11 +0000 (0:00:00.366) 0:05:21.578 ********* 2026-04-07 02:42:19.419648 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.419658 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.419668 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.419678 | orchestrator | 2026-04-07 02:42:19.419706 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 02:42:19.419716 | orchestrator | Tuesday 07 April 2026 02:42:11 +0000 (0:00:00.671) 0:05:22.249 ********* 2026-04-07 02:42:19.419726 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.419737 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.419754 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 02:42:19.419764 | orchestrator | 2026-04-07 02:42:19.419774 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 02:42:19.419784 | orchestrator | Tuesday 07 April 2026 02:42:12 +0000 (0:00:00.351) 0:05:22.601 ********* 2026-04-07 02:42:19.419793 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:42:19.419803 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:42:19.419813 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:42:19.419823 | orchestrator | 2026-04-07 02:42:19.419832 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 02:42:19.419842 | orchestrator | Tuesday 07 April 2026 02:42:12 +0000 (0:00:00.774) 0:05:23.376 ********* 2026-04-07 02:42:19.419852 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.419862 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.419871 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.419881 | orchestrator | 2026-04-07 02:42:19.419891 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 02:42:19.419901 | orchestrator | Tuesday 07 April 2026 02:42:13 +0000 (0:00:00.365) 0:05:23.741 ********* 2026-04-07 02:42:19.419911 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.419921 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.419930 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.419940 | orchestrator | 2026-04-07 02:42:19.419950 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 02:42:19.419960 | orchestrator | Tuesday 07 April 2026 02:42:13 +0000 (0:00:00.682) 0:05:24.424 ********* 2026-04-07 02:42:19.419969 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:42:19.419979 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:42:19.419989 | orchestrator | ok: [testbed-node-2] 2026-04-07 
02:42:19.419999 | orchestrator | 2026-04-07 02:42:19.420008 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 02:42:19.420018 | orchestrator | Tuesday 07 April 2026 02:42:14 +0000 (0:00:00.818) 0:05:25.242 ********* 2026-04-07 02:42:19.420028 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:42:19.420038 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:42:19.420047 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:42:19.420057 | orchestrator | 2026-04-07 02:42:19.420067 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 02:42:19.420077 | orchestrator | Tuesday 07 April 2026 02:42:15 +0000 (0:00:00.837) 0:05:26.080 ********* 2026-04-07 02:42:19.420086 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.420101 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.420112 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.420121 | orchestrator | 2026-04-07 02:42:19.420131 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 02:42:19.420141 | orchestrator | Tuesday 07 April 2026 02:42:15 +0000 (0:00:00.349) 0:05:26.429 ********* 2026-04-07 02:42:19.420151 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:42:19.420161 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:42:19.420171 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:42:19.420180 | orchestrator | 2026-04-07 02:42:19.420190 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 02:42:19.420200 | orchestrator | Tuesday 07 April 2026 02:42:16 +0000 (0:00:00.695) 0:05:27.125 ********* 2026-04-07 02:42:19.420210 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:42:19.420220 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:42:19.420230 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:42:19.420239 | orchestrator | 
2026-04-07 02:42:19.420249 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 02:42:19.420258 | orchestrator | Tuesday 07 April 2026 02:42:16 +0000 (0:00:00.348) 0:05:27.474 *********
2026-04-07 02:42:19.420268 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.420278 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.420288 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.420298 | orchestrator |
2026-04-07 02:42:19.420313 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 02:42:19.420323 | orchestrator | Tuesday 07 April 2026 02:42:17 +0000 (0:00:00.338) 0:05:27.812 *********
2026-04-07 02:42:19.420332 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.420342 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.420356 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.420373 | orchestrator |
2026-04-07 02:42:19.420385 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 02:42:19.420394 | orchestrator | Tuesday 07 April 2026 02:42:17 +0000 (0:00:00.351) 0:05:28.164 *********
2026-04-07 02:42:19.420404 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.420414 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.420423 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.420433 | orchestrator |
2026-04-07 02:42:19.420443 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 02:42:19.420452 | orchestrator | Tuesday 07 April 2026 02:42:18 +0000 (0:00:00.651) 0:05:28.816 *********
2026-04-07 02:42:19.420463 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:42:19.420479 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:42:19.420495 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:42:19.420511 | orchestrator |
2026-04-07 02:42:19.420535 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 02:42:19.420581 | orchestrator | Tuesday 07 April 2026 02:42:18 +0000 (0:00:00.366) 0:05:29.183 *********
2026-04-07 02:42:19.420596 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:19.420613 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:19.420628 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:19.420643 | orchestrator |
2026-04-07 02:42:19.420658 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 02:42:19.420674 | orchestrator | Tuesday 07 April 2026 02:42:19 +0000 (0:00:00.398) 0:05:29.581 *********
2026-04-07 02:42:19.420688 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:42:19.420701 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:42:19.420715 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:42:19.420729 | orchestrator |
2026-04-07 02:42:19.420745 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 02:42:19.420773 | orchestrator | Tuesday 07 April 2026 02:42:19 +0000 (0:00:00.367) 0:05:29.949 *********
2026-04-07 02:43:23.703073 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:43:23.703165 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:43:23.703174 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:23.703182 | orchestrator |
2026-04-07 02:43:23.703190 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-07 02:43:23.703199 | orchestrator | Tuesday 07 April 2026 02:42:20 +0000 (0:00:00.938) 0:05:30.888 *********
2026-04-07 02:43:23.703206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 02:43:23.703214 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 02:43:23.703221 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 02:43:23.703228 | orchestrator |
2026-04-07 02:43:23.703235 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-07 02:43:23.703242 | orchestrator | Tuesday 07 April 2026 02:42:21 +0000 (0:00:00.758) 0:05:31.647 *********
2026-04-07 02:43:23.703248 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:43:23.703256 | orchestrator |
2026-04-07 02:43:23.703263 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-07 02:43:23.703270 | orchestrator | Tuesday 07 April 2026 02:42:21 +0000 (0:00:00.862) 0:05:32.510 *********
2026-04-07 02:43:23.703277 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:23.703285 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:23.703291 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:23.703298 | orchestrator |
2026-04-07 02:43:23.703305 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-07 02:43:23.703329 | orchestrator | Tuesday 07 April 2026 02:42:22 +0000 (0:00:00.725) 0:05:33.235 *********
2026-04-07 02:43:23.703336 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.703343 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.703349 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:43:23.703356 | orchestrator |
2026-04-07 02:43:23.703363 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-07 02:43:23.703371 | orchestrator | Tuesday 07 April 2026 02:42:23 +0000 (0:00:00.374) 0:05:33.610 *********
2026-04-07 02:43:23.703377 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703384 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703391 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703398 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-07 02:43:23.703404 | orchestrator |
2026-04-07 02:43:23.703423 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-07 02:43:23.703430 | orchestrator | Tuesday 07 April 2026 02:42:34 +0000 (0:00:11.480) 0:05:45.090 *********
2026-04-07 02:43:23.703436 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:43:23.703443 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:43:23.703450 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:23.703456 | orchestrator |
2026-04-07 02:43:23.703524 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-07 02:43:23.703533 | orchestrator | Tuesday 07 April 2026 02:42:34 +0000 (0:00:00.397) 0:05:45.488 *********
2026-04-07 02:43:23.703540 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703546 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-07 02:43:23.703553 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-07 02:43:23.703581 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703687 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 02:43:23.703697 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 02:43:23.703705 | orchestrator |
2026-04-07 02:43:23.703713 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-07 02:43:23.703721 | orchestrator | Tuesday 07 April 2026 02:42:37 +0000 (0:00:02.709) 0:05:48.198 *********
2026-04-07 02:43:23.703729 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703736 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-07 02:43:23.703743 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-07 02:43:23.703751 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 02:43:23.703758 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-07 02:43:23.703766 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-07 02:43:23.703774 | orchestrator |
2026-04-07 02:43:23.703781 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-07 02:43:23.703789 | orchestrator | Tuesday 07 April 2026 02:42:39 +0000 (0:00:01.366) 0:05:49.565 *********
2026-04-07 02:43:23.703797 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:43:23.703805 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:43:23.703813 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:23.703820 | orchestrator |
2026-04-07 02:43:23.703828 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-07 02:43:23.703836 | orchestrator | Tuesday 07 April 2026 02:42:39 +0000 (0:00:00.720) 0:05:50.285 *********
2026-04-07 02:43:23.703843 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.703851 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.703858 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:43:23.703865 | orchestrator |
2026-04-07 02:43:23.703873 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-07 02:43:23.703880 | orchestrator | Tuesday 07 April 2026 02:42:40 +0000 (0:00:00.370) 0:05:50.655 *********
2026-04-07 02:43:23.703895 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.703905 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.703916 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:43:23.703924 | orchestrator |
2026-04-07 02:43:23.703932 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-07 02:43:23.703939 | orchestrator | Tuesday 07 April 2026 02:42:40 +0000 (0:00:00.664) 0:05:51.320 *********
2026-04-07 02:43:23.703947 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:43:23.703955 | orchestrator |
2026-04-07 02:43:23.703977 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-07 02:43:23.703986 | orchestrator | Tuesday 07 April 2026 02:42:41 +0000 (0:00:00.644) 0:05:51.965 *********
2026-04-07 02:43:23.703994 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.704002 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.704010 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:43:23.704016 | orchestrator |
2026-04-07 02:43:23.704023 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-07 02:43:23.704029 | orchestrator | Tuesday 07 April 2026 02:42:41 +0000 (0:00:00.388) 0:05:52.353 *********
2026-04-07 02:43:23.704036 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.704042 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.704049 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:43:23.704078 | orchestrator |
2026-04-07 02:43:23.704085 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-07 02:43:23.704092 | orchestrator | Tuesday 07 April 2026 02:42:42 +0000 (0:00:00.735) 0:05:53.089 *********
2026-04-07 02:43:23.704098 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:43:23.704105 | orchestrator |
2026-04-07 02:43:23.704112 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-07 02:43:23.704118 | orchestrator | Tuesday 07 April 2026 02:42:43 +0000 (0:00:00.644) 0:05:53.733 *********
2026-04-07 02:43:23.704125 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:23.704131 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:23.704138 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:23.704144 | orchestrator |
2026-04-07 02:43:23.704151 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-07 02:43:23.704157 | orchestrator | Tuesday 07 April 2026 02:42:44 +0000 (0:00:01.346) 0:05:55.079 *********
2026-04-07 02:43:23.704164 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:23.704170 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:23.704177 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:23.704183 | orchestrator |
2026-04-07 02:43:23.704193 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-07 02:43:23.704204 | orchestrator | Tuesday 07 April 2026 02:42:46 +0000 (0:00:01.682) 0:05:56.762 *********
2026-04-07 02:43:23.704211 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:23.704218 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:23.704224 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:23.704231 | orchestrator |
2026-04-07 02:43:23.704238 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-07 02:43:23.704250 | orchestrator | Tuesday 07 April 2026 02:42:48 +0000 (0:00:01.824) 0:05:58.587 *********
2026-04-07 02:43:23.704256 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:23.704263 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:23.704270 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:23.704276 | orchestrator |
2026-04-07 02:43:23.704283 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-07 02:43:23.704289 | orchestrator | Tuesday 07 April 2026 02:42:49 +0000 (0:00:01.901) 0:06:00.489 *********
2026-04-07 02:43:23.704296 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:23.704303 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:43:23.704309 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-07 02:43:23.704321 | orchestrator |
2026-04-07 02:43:23.704327 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-07 02:43:23.704334 | orchestrator | Tuesday 07 April 2026 02:42:50 +0000 (0:00:00.776) 0:06:01.266 *********
2026-04-07 02:43:23.704340 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-07 02:43:23.704347 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-07 02:43:23.704383 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-07 02:43:23.704390 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-07 02:43:23.704397 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:43:23.704404 | orchestrator |
2026-04-07 02:43:23.704410 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-07 02:43:23.704417 | orchestrator | Tuesday 07 April 2026 02:43:15 +0000 (0:00:24.510) 0:06:25.776 *********
2026-04-07 02:43:23.704424 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:43:23.704430 | orchestrator |
2026-04-07 02:43:23.704437 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-07 02:43:23.704443 | orchestrator | Tuesday 07 April 2026 02:43:16 +0000 (0:00:01.319) 0:06:27.095 *********
2026-04-07 02:43:23.704450 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:23.704456 | orchestrator |
2026-04-07 02:43:23.704463 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-07 02:43:23.704469 | orchestrator | Tuesday 07 April 2026 02:43:16 +0000 (0:00:00.340) 0:06:27.436 *********
2026-04-07 02:43:23.704475 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:23.704482 | orchestrator |
2026-04-07 02:43:23.704489 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-07 02:43:23.704495 | orchestrator | Tuesday 07 April 2026 02:43:17 +0000 (0:00:00.172) 0:06:27.609 *********
2026-04-07 02:43:23.704502 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-07 02:43:23.704509 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-07 02:43:23.704515 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-07 02:43:23.704522 | orchestrator |
2026-04-07 02:43:23.704528 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-07 02:43:23.704539 | orchestrator | Tuesday 07 April 2026 02:43:23 +0000 (0:00:06.625) 0:06:34.235 *********
2026-04-07 02:43:47.798554 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-07 02:43:47.798672 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-07 02:43:47.798683 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-07 02:43:47.798690 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-07 02:43:47.798697 | orchestrator |
2026-04-07 02:43:47.798704 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-07 02:43:47.798712 | orchestrator | Tuesday 07 April 2026 02:43:29 +0000 (0:00:05.375) 0:06:39.610 *********
2026-04-07 02:43:47.798718 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:47.798725 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:47.798732 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:47.798738 | orchestrator |
2026-04-07 02:43:47.798745 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-07 02:43:47.798751 | orchestrator | Tuesday 07 April 2026 02:43:29 +0000 (0:00:00.754) 0:06:40.365 *********
2026-04-07 02:43:47.798758 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:43:47.798764 | orchestrator |
2026-04-07 02:43:47.798788 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-07 02:43:47.798794 | orchestrator | Tuesday 07 April 2026 02:43:30 +0000 (0:00:00.607) 0:06:40.973 *********
2026-04-07 02:43:47.798800 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:43:47.798807 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:43:47.798813 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:47.798819 | orchestrator |
2026-04-07 02:43:47.798825 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-07 02:43:47.798831 | orchestrator | Tuesday 07 April 2026 02:43:31 +0000 (0:00:00.687) 0:06:41.660 *********
2026-04-07 02:43:47.798837 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:43:47.798844 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:43:47.798850 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:43:47.798856 | orchestrator |
2026-04-07 02:43:47.798862 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-07 02:43:47.798868 | orchestrator | Tuesday 07 April 2026 02:43:32 +0000 (0:00:01.193) 0:06:42.854 *********
2026-04-07 02:43:47.798875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 02:43:47.798881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 02:43:47.798899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 02:43:47.798905 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:43:47.798911 | orchestrator |
2026-04-07 02:43:47.798918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-07 02:43:47.798924 | orchestrator | Tuesday 07 April 2026 02:43:33 +0000 (0:00:00.741) 0:06:43.595 *********
2026-04-07 02:43:47.798930 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:43:47.798936 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:43:47.798942 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:43:47.798948 | orchestrator |
2026-04-07 02:43:47.798955 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-07 02:43:47.798961 | orchestrator |
2026-04-07 02:43:47.798968 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 02:43:47.798974 | orchestrator | Tuesday 07 April 2026 02:43:33 +0000 (0:00:00.702) 0:06:44.298 *********
2026-04-07 02:43:47.798981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:43:47.798989 | orchestrator |
2026-04-07 02:43:47.798995 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 02:43:47.799001 | orchestrator | Tuesday 07 April 2026 02:43:34 +0000 (0:00:01.169) 0:06:45.467 *********
2026-04-07 02:43:47.799007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:43:47.799013 | orchestrator |
2026-04-07 02:43:47.799020 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 02:43:47.799026 | orchestrator | Tuesday 07 April 2026 02:43:35 +0000 (0:00:00.960) 0:06:46.428 *********
2026-04-07 02:43:47.799032 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:43:47.799038 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:43:47.799044 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:43:47.799050 | orchestrator |
2026-04-07 02:43:47.799056 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 02:43:47.799062 | orchestrator | Tuesday 07 April 2026 02:43:36 +0000 (0:00:00.417) 0:06:46.846 *********
2026-04-07 02:43:47.799069 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:43:47.799075 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:43:47.799081 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:43:47.799087 | orchestrator |
2026-04-07 02:43:47.799095 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 02:43:47.799106 | orchestrator | Tuesday 07 April 2026 02:43:37 +0000 (0:00:00.720) 0:06:47.566 *********
2026-04-07 02:43:47.799115 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:43:47.799126 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:43:47.799142 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:43:47.799153 | orchestrator |
2026-04-07 02:43:47.799164 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 02:43:47.799209 | orchestrator | Tuesday 07 April 2026 02:43:37 +0000 (0:00:00.739) 0:06:48.305 *********
2026-04-07 02:43:47.799220 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:43:47.799231 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:43:47.799255 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:43:47.799264 | orchestrator |
2026-04-07 02:43:47.799275 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 02:43:47.799285 | orchestrator | Tuesday 07 April 2026 02:43:38 +0000 (0:00:01.099) 0:06:49.405 *********
2026-04-07 02:43:47.799296 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:43:47.799307 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:43:47.799317 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:43:47.799327 | orchestrator |
2026-04-07 02:43:47.799353 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 02:43:47.799363 | orchestrator | Tuesday 07 April 2026 02:43:39 +0000 (0:00:00.404) 0:06:49.810 *********
2026-04-07 02:43:47.799373 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:43:47.799383 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:43:47.799393 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:43:47.799403 | orchestrator |
2026-04-07 02:43:47.799413 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 02:43:47.799424 | orchestrator | Tuesday 07 April 2026 02:43:39 +0000 (0:00:00.390) 0:06:50.201 *********
2026-04-07 02:43:47.799434 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:43:47.799444 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:43:47.799454 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:43:47.799465 | orchestrator |
2026-04-07 02:43:47.799475 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 02:43:47.799485 | orchestrator | Tuesday 07 April 2026 02:43:40 +0000 (0:00:00.365) 0:06:50.567 *********
2026-04-07 02:43:47.799496 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:43:47.799506 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:43:47.799516 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:43:47.799527 | orchestrator |
2026-04-07 02:43:47.799537 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 02:43:47.799547 | orchestrator | Tuesday 07 April 2026 02:43:41 +0000 (0:00:01.608) 0:06:52.175 *********
2026-04-07 02:43:47.799558 | orchestrator | ok: [testbed-node-3]
2026-04-07
02:43:47.799586 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.799597 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.799608 | orchestrator | 2026-04-07 02:43:47.799618 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 02:43:47.799629 | orchestrator | Tuesday 07 April 2026 02:43:42 +0000 (0:00:00.772) 0:06:52.947 ********* 2026-04-07 02:43:47.799639 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:43:47.799650 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:43:47.799660 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:43:47.799670 | orchestrator | 2026-04-07 02:43:47.799681 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 02:43:47.799691 | orchestrator | Tuesday 07 April 2026 02:43:42 +0000 (0:00:00.383) 0:06:53.331 ********* 2026-04-07 02:43:47.799702 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:43:47.799712 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:43:47.799723 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:43:47.799732 | orchestrator | 2026-04-07 02:43:47.799742 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 02:43:47.799762 | orchestrator | Tuesday 07 April 2026 02:43:43 +0000 (0:00:00.378) 0:06:53.710 ********* 2026-04-07 02:43:47.799773 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.799787 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.799799 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.799811 | orchestrator | 2026-04-07 02:43:47.799829 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 02:43:47.799841 | orchestrator | Tuesday 07 April 2026 02:43:43 +0000 (0:00:00.708) 0:06:54.418 ********* 2026-04-07 02:43:47.799852 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.799863 | orchestrator | ok: 
[testbed-node-4] 2026-04-07 02:43:47.799874 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.799884 | orchestrator | 2026-04-07 02:43:47.799895 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 02:43:47.799906 | orchestrator | Tuesday 07 April 2026 02:43:44 +0000 (0:00:00.381) 0:06:54.800 ********* 2026-04-07 02:43:47.799917 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.799925 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.799931 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.799937 | orchestrator | 2026-04-07 02:43:47.799943 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 02:43:47.799949 | orchestrator | Tuesday 07 April 2026 02:43:44 +0000 (0:00:00.413) 0:06:55.213 ********* 2026-04-07 02:43:47.799955 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:43:47.799962 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:43:47.799968 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:43:47.799974 | orchestrator | 2026-04-07 02:43:47.799980 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 02:43:47.799986 | orchestrator | Tuesday 07 April 2026 02:43:45 +0000 (0:00:00.375) 0:06:55.589 ********* 2026-04-07 02:43:47.799992 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:43:47.799999 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:43:47.800005 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:43:47.800011 | orchestrator | 2026-04-07 02:43:47.800017 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 02:43:47.800023 | orchestrator | Tuesday 07 April 2026 02:43:45 +0000 (0:00:00.717) 0:06:56.307 ********* 2026-04-07 02:43:47.800029 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:43:47.800036 | orchestrator | skipping: [testbed-node-4] 2026-04-07 
02:43:47.800042 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:43:47.800048 | orchestrator | 2026-04-07 02:43:47.800054 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 02:43:47.800060 | orchestrator | Tuesday 07 April 2026 02:43:46 +0000 (0:00:00.361) 0:06:56.668 ********* 2026-04-07 02:43:47.800066 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.800072 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.800079 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.800085 | orchestrator | 2026-04-07 02:43:47.800091 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 02:43:47.800097 | orchestrator | Tuesday 07 April 2026 02:43:46 +0000 (0:00:00.373) 0:06:57.042 ********* 2026-04-07 02:43:47.800103 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.800109 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.800115 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.800121 | orchestrator | 2026-04-07 02:43:47.800128 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-07 02:43:47.800134 | orchestrator | Tuesday 07 April 2026 02:43:47 +0000 (0:00:00.888) 0:06:57.931 ********* 2026-04-07 02:43:47.800140 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:43:47.800146 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:43:47.800152 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:43:47.800158 | orchestrator | 2026-04-07 02:43:47.800164 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-07 02:43:47.800177 | orchestrator | Tuesday 07 April 2026 02:43:47 +0000 (0:00:00.396) 0:06:58.327 ********* 2026-04-07 02:44:50.315347 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:44:50.315426 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:44:50.315432 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:44:50.315453 | orchestrator | 2026-04-07 02:44:50.315458 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-07 02:44:50.315463 | orchestrator | Tuesday 07 April 2026 02:43:48 +0000 (0:00:00.705) 0:06:59.033 ********* 2026-04-07 02:44:50.315467 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:44:50.315471 | orchestrator | 2026-04-07 02:44:50.315475 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-07 02:44:50.315479 | orchestrator | Tuesday 07 April 2026 02:43:49 +0000 (0:00:00.911) 0:06:59.944 ********* 2026-04-07 02:44:50.315483 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:44:50.315489 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:44:50.315492 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:44:50.315496 | orchestrator | 2026-04-07 02:44:50.315500 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-07 02:44:50.315504 | orchestrator | Tuesday 07 April 2026 02:43:49 +0000 (0:00:00.367) 0:07:00.312 ********* 2026-04-07 02:44:50.315508 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:44:50.315511 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:44:50.315515 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:44:50.315519 | orchestrator | 2026-04-07 02:44:50.315523 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-07 02:44:50.315526 | orchestrator | Tuesday 07 April 2026 02:43:50 +0000 (0:00:00.385) 0:07:00.698 ********* 2026-04-07 02:44:50.315530 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:44:50.315535 | 
orchestrator | ok: [testbed-node-4] 2026-04-07 02:44:50.315538 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:44:50.315542 | orchestrator | 2026-04-07 02:44:50.315546 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-07 02:44:50.315550 | orchestrator | Tuesday 07 April 2026 02:43:50 +0000 (0:00:00.668) 0:07:01.366 ********* 2026-04-07 02:44:50.315553 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:44:50.315557 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:44:50.315561 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:44:50.315565 | orchestrator | 2026-04-07 02:44:50.315626 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-07 02:44:50.315631 | orchestrator | Tuesday 07 April 2026 02:43:51 +0000 (0:00:00.694) 0:07:02.061 ********* 2026-04-07 02:44:50.315635 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-07 02:44:50.315641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-07 02:44:50.315644 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-07 02:44:50.315648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-07 02:44:50.315653 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-07 02:44:50.315656 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-07 02:44:50.315660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-07 02:44:50.315664 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-07 02:44:50.315668 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.swappiness', 'value': 10}) 2026-04-07 02:44:50.315672 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-07 02:44:50.315676 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-07 02:44:50.315680 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-07 02:44:50.315683 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-07 02:44:50.315687 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-07 02:44:50.315695 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-07 02:44:50.315699 | orchestrator | 2026-04-07 02:44:50.315702 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-07 02:44:50.315706 | orchestrator | Tuesday 07 April 2026 02:43:55 +0000 (0:00:04.244) 0:07:06.305 ********* 2026-04-07 02:44:50.315710 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:44:50.315714 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:44:50.315718 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:44:50.315721 | orchestrator | 2026-04-07 02:44:50.315725 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-07 02:44:50.315729 | orchestrator | Tuesday 07 April 2026 02:43:56 +0000 (0:00:00.405) 0:07:06.711 ********* 2026-04-07 02:44:50.315733 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:44:50.315736 | orchestrator | 2026-04-07 02:44:50.315740 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-07 02:44:50.315744 | orchestrator | Tuesday 07 April 2026 02:43:57 +0000 (0:00:00.925) 0:07:07.636 
********* 2026-04-07 02:44:50.315748 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-07 02:44:50.315751 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-07 02:44:50.315765 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-07 02:44:50.315769 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-07 02:44:50.315773 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-07 02:44:50.315777 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-07 02:44:50.315781 | orchestrator | 2026-04-07 02:44:50.315785 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-07 02:44:50.315789 | orchestrator | Tuesday 07 April 2026 02:43:58 +0000 (0:00:01.130) 0:07:08.767 ********* 2026-04-07 02:44:50.315792 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:44:50.315796 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-07 02:44:50.315800 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:44:50.315804 | orchestrator | 2026-04-07 02:44:50.315808 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-07 02:44:50.315811 | orchestrator | Tuesday 07 April 2026 02:44:00 +0000 (0:00:02.387) 0:07:11.155 ********* 2026-04-07 02:44:50.315815 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-07 02:44:50.315819 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-07 02:44:50.315823 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:44:50.315827 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-07 02:44:50.315831 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-07 02:44:50.315834 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:44:50.315838 | 
orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-07 02:44:50.315842 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-07 02:44:50.315846 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:44:50.315849 | orchestrator | 2026-04-07 02:44:50.315853 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-07 02:44:50.315857 | orchestrator | Tuesday 07 April 2026 02:44:01 +0000 (0:00:01.300) 0:07:12.456 ********* 2026-04-07 02:44:50.315861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-07 02:44:50.315864 | orchestrator | 2026-04-07 02:44:50.315868 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-07 02:44:50.315872 | orchestrator | Tuesday 07 April 2026 02:44:04 +0000 (0:00:02.124) 0:07:14.580 ********* 2026-04-07 02:44:50.315879 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:44:50.315883 | orchestrator | 2026-04-07 02:44:50.315891 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-07 02:44:50.315897 | orchestrator | Tuesday 07 April 2026 02:44:05 +0000 (0:00:01.126) 0:07:15.706 ********* 2026-04-07 02:44:50.315904 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}) 2026-04-07 02:44:50.315911 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 02:44:50.315917 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}) 2026-04-07 02:44:50.315923 | orchestrator | changed: [testbed-node-5] => (item={'data': 
'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 02:44:50.315929 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}) 2026-04-07 02:44:50.315935 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}) 2026-04-07 02:44:50.315941 | orchestrator | 2026-04-07 02:44:50.315947 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-07 02:44:50.315953 | orchestrator | Tuesday 07 April 2026 02:44:44 +0000 (0:00:39.817) 0:07:55.524 ********* 2026-04-07 02:44:50.315959 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:44:50.315965 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:44:50.315970 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:44:50.315976 | orchestrator | 2026-04-07 02:44:50.315992 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-07 02:44:50.315998 | orchestrator | Tuesday 07 April 2026 02:44:45 +0000 (0:00:00.399) 0:07:55.924 ********* 2026-04-07 02:44:50.316005 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:44:50.316018 | orchestrator | 2026-04-07 02:44:50.316025 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-07 02:44:50.316031 | orchestrator | Tuesday 07 April 2026 02:44:46 +0000 (0:00:00.927) 0:07:56.851 ********* 2026-04-07 02:44:50.316037 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:44:50.316044 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:44:50.316050 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:44:50.316057 | orchestrator | 2026-04-07 02:44:50.316064 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-07 02:44:50.316070 | orchestrator | Tuesday 07 April 2026 02:44:46 +0000 (0:00:00.657) 0:07:57.509 ********* 2026-04-07 02:44:50.316076 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:44:50.316082 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:44:50.316087 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:44:50.316093 | orchestrator | 2026-04-07 02:44:50.316098 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-07 02:44:50.316104 | orchestrator | Tuesday 07 April 2026 02:44:49 +0000 (0:00:02.471) 0:07:59.980 ********* 2026-04-07 02:44:50.316116 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:45:28.935283 | orchestrator | 2026-04-07 02:45:28.935395 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-07 02:45:28.935414 | orchestrator | Tuesday 07 April 2026 02:44:50 +0000 (0:00:00.868) 0:08:00.848 ********* 2026-04-07 02:45:28.935427 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:45:28.935440 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:45:28.935451 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:45:28.935462 | orchestrator | 2026-04-07 02:45:28.935473 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-07 02:45:28.935484 | orchestrator | Tuesday 07 April 2026 02:44:51 +0000 (0:00:01.230) 0:08:02.078 ********* 2026-04-07 02:45:28.935521 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:45:28.935532 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:45:28.935543 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:45:28.935554 | orchestrator | 2026-04-07 02:45:28.935565 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-07 
02:45:28.935576 | orchestrator | Tuesday 07 April 2026 02:44:52 +0000 (0:00:01.222) 0:08:03.301 ********* 2026-04-07 02:45:28.935640 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:45:28.935652 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:45:28.935663 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:45:28.935674 | orchestrator | 2026-04-07 02:45:28.935685 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-07 02:45:28.935695 | orchestrator | Tuesday 07 April 2026 02:44:54 +0000 (0:00:02.151) 0:08:05.452 ********* 2026-04-07 02:45:28.935706 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.935717 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.935728 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:45:28.935752 | orchestrator | 2026-04-07 02:45:28.935774 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-07 02:45:28.935786 | orchestrator | Tuesday 07 April 2026 02:44:55 +0000 (0:00:00.367) 0:08:05.820 ********* 2026-04-07 02:45:28.935797 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.935807 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.935821 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:45:28.935833 | orchestrator | 2026-04-07 02:45:28.935846 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-07 02:45:28.935859 | orchestrator | Tuesday 07 April 2026 02:44:55 +0000 (0:00:00.385) 0:08:06.205 ********* 2026-04-07 02:45:28.935872 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-07 02:45:28.935901 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-07 02:45:28.935914 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-04-07 02:45:28.935926 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 02:45:28.935939 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-07 02:45:28.935952 | 
orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-07 02:45:28.935964 | orchestrator | 2026-04-07 02:45:28.935977 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-07 02:45:28.935991 | orchestrator | Tuesday 07 April 2026 02:44:56 +0000 (0:00:01.060) 0:08:07.266 ********* 2026-04-07 02:45:28.936004 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-07 02:45:28.936017 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-07 02:45:28.936029 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-07 02:45:28.936042 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-07 02:45:28.936055 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-07 02:45:28.936067 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-07 02:45:28.936079 | orchestrator | 2026-04-07 02:45:28.936091 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-07 02:45:28.936104 | orchestrator | Tuesday 07 April 2026 02:44:59 +0000 (0:00:02.505) 0:08:09.772 ********* 2026-04-07 02:45:28.936117 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-07 02:45:28.936130 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-07 02:45:28.936142 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-07 02:45:28.936155 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-07 02:45:28.936167 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-07 02:45:28.936180 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-07 02:45:28.936192 | orchestrator | 2026-04-07 02:45:28.936204 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-07 02:45:28.936215 | orchestrator | Tuesday 07 April 2026 02:45:02 +0000 (0:00:03.433) 0:08:13.205 ********* 2026-04-07 02:45:28.936225 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936236 | 
orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.936256 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 02:45:28.936267 | orchestrator | 2026-04-07 02:45:28.936279 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-07 02:45:28.936289 | orchestrator | Tuesday 07 April 2026 02:45:05 +0000 (0:00:02.475) 0:08:15.681 ********* 2026-04-07 02:45:28.936300 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936311 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.936322 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-07 02:45:28.936334 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 02:45:28.936345 | orchestrator | 2026-04-07 02:45:28.936355 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-07 02:45:28.936366 | orchestrator | Tuesday 07 April 2026 02:45:17 +0000 (0:00:12.623) 0:08:28.304 ********* 2026-04-07 02:45:28.936377 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936396 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.936414 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:45:28.936432 | orchestrator | 2026-04-07 02:45:28.936451 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 02:45:28.936469 | orchestrator | Tuesday 07 April 2026 02:45:19 +0000 (0:00:01.341) 0:08:29.646 ********* 2026-04-07 02:45:28.936486 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936503 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.936520 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:45:28.936538 | orchestrator | 2026-04-07 02:45:28.936556 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-07 
02:45:28.936647 | orchestrator | Tuesday 07 April 2026 02:45:19 +0000 (0:00:00.368) 0:08:30.014 ********* 2026-04-07 02:45:28.936669 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:45:28.936688 | orchestrator | 2026-04-07 02:45:28.936705 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-07 02:45:28.936716 | orchestrator | Tuesday 07 April 2026 02:45:20 +0000 (0:00:00.941) 0:08:30.956 ********* 2026-04-07 02:45:28.936727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:45:28.936738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:45:28.936748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:45:28.936759 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936770 | orchestrator | 2026-04-07 02:45:28.936780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-07 02:45:28.936791 | orchestrator | Tuesday 07 April 2026 02:45:20 +0000 (0:00:00.428) 0:08:31.384 ********* 2026-04-07 02:45:28.936801 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936812 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:45:28.936822 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:45:28.936833 | orchestrator | 2026-04-07 02:45:28.936844 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-07 02:45:28.936854 | orchestrator | Tuesday 07 April 2026 02:45:21 +0000 (0:00:00.423) 0:08:31.807 ********* 2026-04-07 02:45:28.936865 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:45:28.936876 | orchestrator | 2026-04-07 02:45:28.936886 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-07 02:45:28.936897 | orchestrator | Tuesday 07 April 
2026 02:45:21 +0000 (0:00:00.242) 0:08:32.050 *********
2026-04-07 02:45:28.936907 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.936918 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:28.936928 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:28.936939 | orchestrator |
2026-04-07 02:45:28.936949 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-07 02:45:28.936960 | orchestrator | Tuesday 07 April 2026 02:45:22 +0000 (0:00:00.668) 0:08:32.719 *********
2026-04-07 02:45:28.936981 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.936992 | orchestrator |
2026-04-07 02:45:28.937010 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-07 02:45:28.937021 | orchestrator | Tuesday 07 April 2026 02:45:22 +0000 (0:00:00.271) 0:08:32.991 *********
2026-04-07 02:45:28.937032 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937043 | orchestrator |
2026-04-07 02:45:28.937053 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-07 02:45:28.937064 | orchestrator | Tuesday 07 April 2026 02:45:22 +0000 (0:00:00.285) 0:08:33.276 *********
2026-04-07 02:45:28.937075 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937086 | orchestrator |
2026-04-07 02:45:28.937096 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-07 02:45:28.937107 | orchestrator | Tuesday 07 April 2026 02:45:22 +0000 (0:00:00.152) 0:08:33.428 *********
2026-04-07 02:45:28.937118 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937128 | orchestrator |
2026-04-07 02:45:28.937139 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-07 02:45:28.937149 | orchestrator | Tuesday 07 April 2026 02:45:23 +0000 (0:00:00.253) 0:08:33.682 *********
2026-04-07 02:45:28.937160 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937170 | orchestrator |
2026-04-07 02:45:28.937181 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-07 02:45:28.937192 | orchestrator | Tuesday 07 April 2026 02:45:23 +0000 (0:00:00.247) 0:08:33.930 *********
2026-04-07 02:45:28.937202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 02:45:28.937213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 02:45:28.937224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 02:45:28.937235 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937245 | orchestrator |
2026-04-07 02:45:28.937256 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-07 02:45:28.937267 | orchestrator | Tuesday 07 April 2026 02:45:23 +0000 (0:00:00.473) 0:08:34.404 *********
2026-04-07 02:45:28.937277 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937288 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:28.937299 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:28.937309 | orchestrator |
2026-04-07 02:45:28.937320 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-07 02:45:28.937331 | orchestrator | Tuesday 07 April 2026 02:45:24 +0000 (0:00:00.385) 0:08:34.789 *********
2026-04-07 02:45:28.937341 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937352 | orchestrator |
2026-04-07 02:45:28.937363 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-07 02:45:28.937373 | orchestrator | Tuesday 07 April 2026 02:45:24 +0000 (0:00:00.244) 0:08:35.033 *********
2026-04-07 02:45:28.937384 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:28.937394 | orchestrator |
2026-04-07 02:45:28.937405 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-07 02:45:28.937415 | orchestrator |
2026-04-07 02:45:28.937426 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 02:45:28.937437 | orchestrator | Tuesday 07 April 2026 02:45:26 +0000 (0:00:01.576) 0:08:36.610 *********
2026-04-07 02:45:28.937448 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:45:28.937461 | orchestrator |
2026-04-07 02:45:28.937472 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 02:45:28.937482 | orchestrator | Tuesday 07 April 2026 02:45:27 +0000 (0:00:01.430) 0:08:38.040 *********
2026-04-07 02:45:28.937502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:45:58.255301 | orchestrator |
2026-04-07 02:45:58.255423 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 02:45:58.255437 | orchestrator | Tuesday 07 April 2026 02:45:28 +0000 (0:00:01.424) 0:08:39.464 *********
2026-04-07 02:45:58.255445 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.255454 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.255460 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.255515 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.255525 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.255532 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.255538 | orchestrator |
2026-04-07 02:45:58.255545 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 02:45:58.255552 | orchestrator | Tuesday 07 April 2026 02:45:30 +0000 (0:00:01.481) 0:08:40.946 *********
2026-04-07 02:45:58.255558 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.255565 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.255571 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.255578 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.255584 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.255651 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.255659 | orchestrator |
2026-04-07 02:45:58.255666 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 02:45:58.255673 | orchestrator | Tuesday 07 April 2026 02:45:31 +0000 (0:00:00.863) 0:08:41.809 *********
2026-04-07 02:45:58.255679 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.255686 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.255692 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.255699 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.255705 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.255711 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.255718 | orchestrator |
2026-04-07 02:45:58.255724 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 02:45:58.255731 | orchestrator | Tuesday 07 April 2026 02:45:32 +0000 (0:00:01.000) 0:08:42.809 *********
2026-04-07 02:45:58.255737 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.255743 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.255749 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.255755 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.255762 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.255768 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.255774 | orchestrator |
2026-04-07 02:45:58.255793 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 02:45:58.255800 | orchestrator | Tuesday 07 April 2026 02:45:33 +0000 (0:00:00.826) 0:08:43.636 *********
2026-04-07 02:45:58.255806 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.255813 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.255819 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.255825 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.255831 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.255839 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.255846 | orchestrator |
2026-04-07 02:45:58.255854 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 02:45:58.255861 | orchestrator | Tuesday 07 April 2026 02:45:34 +0000 (0:00:01.504) 0:08:45.140 *********
2026-04-07 02:45:58.255868 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.255875 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.255882 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.255889 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.255896 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.255904 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.255911 | orchestrator |
2026-04-07 02:45:58.255918 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 02:45:58.255925 | orchestrator | Tuesday 07 April 2026 02:45:35 +0000 (0:00:00.756) 0:08:45.896 *********
2026-04-07 02:45:58.255933 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.255958 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.255965 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.255972 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.255980 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.255987 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.255994 | orchestrator |
2026-04-07 02:45:58.256001 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 02:45:58.256008 | orchestrator | Tuesday 07 April 2026 02:45:36 +0000 (0:00:00.963) 0:08:46.860 *********
2026-04-07 02:45:58.256015 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256023 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256031 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256038 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256045 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256052 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256059 | orchestrator |
2026-04-07 02:45:58.256066 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 02:45:58.256074 | orchestrator | Tuesday 07 April 2026 02:45:37 +0000 (0:00:01.100) 0:08:47.960 *********
2026-04-07 02:45:58.256081 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256088 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256095 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256102 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256109 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256116 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256123 | orchestrator |
2026-04-07 02:45:58.256130 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-07 02:45:58.256138 | orchestrator | Tuesday 07 April 2026 02:45:38 +0000 (0:00:01.468) 0:08:49.428 *********
2026-04-07 02:45:58.256146 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.256154 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.256161 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.256168 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256176 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256183 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256191 | orchestrator |
2026-04-07 02:45:58.256198 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-07 02:45:58.256205 | orchestrator | Tuesday 07 April 2026 02:45:39 +0000 (0:00:00.853) 0:08:50.282 *********
2026-04-07 02:45:58.256212 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.256220 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.256226 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.256232 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256238 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256258 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256266 | orchestrator |
2026-04-07 02:45:58.256272 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-07 02:45:58.256278 | orchestrator | Tuesday 07 April 2026 02:45:40 +0000 (0:00:01.041) 0:08:51.323 *********
2026-04-07 02:45:58.256284 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256290 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256296 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256303 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256309 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256315 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256321 | orchestrator |
2026-04-07 02:45:58.256327 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 02:45:58.256333 | orchestrator | Tuesday 07 April 2026 02:45:41 +0000 (0:00:00.739) 0:08:52.063 *********
2026-04-07 02:45:58.256340 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256346 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256352 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256358 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256364 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256375 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256381 | orchestrator |
2026-04-07 02:45:58.256388 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 02:45:58.256394 | orchestrator | Tuesday 07 April 2026 02:45:42 +0000 (0:00:01.026) 0:08:53.090 *********
2026-04-07 02:45:58.256400 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256406 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256412 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256418 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256425 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256431 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256447 | orchestrator |
2026-04-07 02:45:58.256454 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 02:45:58.256460 | orchestrator | Tuesday 07 April 2026 02:45:43 +0000 (0:00:00.681) 0:08:53.771 *********
2026-04-07 02:45:58.256466 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.256472 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.256486 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.256492 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256498 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256504 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256511 | orchestrator |
2026-04-07 02:45:58.256517 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 02:45:58.256523 | orchestrator | Tuesday 07 April 2026 02:45:44 +0000 (0:00:00.983) 0:08:54.754 *********
2026-04-07 02:45:58.256529 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.256536 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.256542 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.256548 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:45:58.256554 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:45:58.256560 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:45:58.256566 | orchestrator |
2026-04-07 02:45:58.256572 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 02:45:58.256578 | orchestrator | Tuesday 07 April 2026 02:45:44 +0000 (0:00:00.690) 0:08:55.444 *********
2026-04-07 02:45:58.256584 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:45:58.256603 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:45:58.256609 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:45:58.256615 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256621 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256628 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256634 | orchestrator |
2026-04-07 02:45:58.256641 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 02:45:58.256647 | orchestrator | Tuesday 07 April 2026 02:45:45 +0000 (0:00:01.062) 0:08:56.507 *********
2026-04-07 02:45:58.256653 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256659 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256665 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256697 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256703 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256710 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256716 | orchestrator |
2026-04-07 02:45:58.256722 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 02:45:58.256728 | orchestrator | Tuesday 07 April 2026 02:45:46 +0000 (0:00:00.685) 0:08:57.193 *********
2026-04-07 02:45:58.256734 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:45:58.256741 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:45:58.256747 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:45:58.256753 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256759 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:45:58.256765 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:45:58.256771 | orchestrator |
2026-04-07 02:45:58.256777 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-07 02:45:58.256784 | orchestrator | Tuesday 07 April 2026 02:45:48 +0000 (0:00:01.499) 0:08:58.692 *********
2026-04-07 02:45:58.256795 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:45:58.256801 | orchestrator |
2026-04-07 02:45:58.256808 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-07 02:45:58.256814 | orchestrator | Tuesday 07 April 2026 02:45:52 +0000 (0:00:04.330) 0:09:03.023 *********
2026-04-07 02:45:58.256820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:45:58.256827 | orchestrator |
2026-04-07 02:45:58.256833 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-07 02:45:58.256839 | orchestrator | Tuesday 07 April 2026 02:45:55 +0000 (0:00:02.868) 0:09:05.891 *********
2026-04-07 02:45:58.256845 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:45:58.256851 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:45:58.256857 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:45:58.256864 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:45:58.256870 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:45:58.256876 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:45:58.256882 | orchestrator |
2026-04-07 02:45:58.256888 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-07 02:45:58.256895 | orchestrator | Tuesday 07 April 2026 02:45:56 +0000 (0:00:01.520) 0:09:07.412 *********
2026-04-07 02:45:58.256901 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:45:58.256907 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:45:58.256918 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:46:23.294112 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:46:23.294203 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:46:23.294214 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:46:23.294222 | orchestrator |
2026-04-07 02:46:23.294230 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-07 02:46:23.294239 | orchestrator | Tuesday 07 April 2026 02:45:58 +0000 (0:00:01.372) 0:09:08.784 *********
2026-04-07 02:46:23.294247 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:46:23.294255 | orchestrator |
2026-04-07 02:46:23.294274 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-07 02:46:23.294288 | orchestrator | Tuesday 07 April 2026 02:45:59 +0000 (0:00:01.446) 0:09:10.230 *********
2026-04-07 02:46:23.294295 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:46:23.294302 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:46:23.294309 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:46:23.294316 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:46:23.294322 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:46:23.294329 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:46:23.294336 | orchestrator |
2026-04-07 02:46:23.294343 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-07 02:46:23.294350 | orchestrator | Tuesday 07 April 2026 02:46:01 +0000 (0:00:01.626) 0:09:11.857 *********
2026-04-07 02:46:23.294356 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:46:23.294363 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:46:23.294370 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:46:23.294377 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:46:23.294383 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:46:23.294390 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:46:23.294396 | orchestrator |
2026-04-07 02:46:23.294403 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-07 02:46:23.294410 | orchestrator | Tuesday 07 April 2026 02:46:05 +0000 (0:00:03.986) 0:09:15.844 *********
2026-04-07 02:46:23.294430 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 02:46:23.294438 | orchestrator |
2026-04-07 02:46:23.294444 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-07 02:46:23.294468 | orchestrator | Tuesday 07 April 2026 02:46:06 +0000 (0:00:01.531) 0:09:17.375 *********
2026-04-07 02:46:23.294475 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.294483 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.294489 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.294496 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:46:23.294502 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:46:23.294509 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:46:23.294515 | orchestrator |
2026-04-07 02:46:23.294522 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-07 02:46:23.294529 | orchestrator | Tuesday 07 April 2026 02:46:07 +0000 (0:00:00.768) 0:09:18.143 *********
2026-04-07 02:46:23.294535 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:46:23.294542 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:46:23.294548 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:46:23.294555 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:46:23.294562 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:46:23.294568 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:46:23.294574 | orchestrator |
2026-04-07 02:46:23.294581 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-07 02:46:23.294588 | orchestrator | Tuesday 07 April 2026 02:46:10 +0000 (0:00:02.705) 0:09:20.849 *********
2026-04-07 02:46:23.294640 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.294649 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.294656 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.294664 | orchestrator | ok: [testbed-node-0]
2026-04-07 02:46:23.294672 | orchestrator | ok: [testbed-node-1]
2026-04-07 02:46:23.294679 | orchestrator | ok: [testbed-node-2]
2026-04-07 02:46:23.294687 | orchestrator |
2026-04-07 02:46:23.294695 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-07 02:46:23.294703 | orchestrator |
2026-04-07 02:46:23.294711 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-07 02:46:23.294719 | orchestrator | Tuesday 07 April 2026 02:46:11 +0000 (0:00:01.002) 0:09:21.851 *********
2026-04-07 02:46:23.294727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:46:23.294735 | orchestrator |
2026-04-07 02:46:23.294742 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-07 02:46:23.294750 | orchestrator | Tuesday 07 April 2026 02:46:12 +0000 (0:00:00.882) 0:09:22.734 *********
2026-04-07 02:46:23.294756 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:46:23.294763 | orchestrator |
2026-04-07 02:46:23.294769 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-07 02:46:23.294776 | orchestrator | Tuesday 07 April 2026 02:46:12 +0000 (0:00:00.564) 0:09:23.299 *********
2026-04-07 02:46:23.294783 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.294789 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.294796 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.294802 | orchestrator |
2026-04-07 02:46:23.294809 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-07 02:46:23.294816 | orchestrator | Tuesday 07 April 2026 02:46:13 +0000 (0:00:00.654) 0:09:23.953 *********
2026-04-07 02:46:23.294822 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.294829 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.294835 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.294842 | orchestrator |
2026-04-07 02:46:23.294848 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-07 02:46:23.294855 | orchestrator | Tuesday 07 April 2026 02:46:14 +0000 (0:00:00.765) 0:09:24.719 *********
2026-04-07 02:46:23.294862 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.294868 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.294888 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.294895 | orchestrator |
2026-04-07 02:46:23.294902 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-07 02:46:23.294914 | orchestrator | Tuesday 07 April 2026 02:46:14 +0000 (0:00:00.795) 0:09:25.515 *********
2026-04-07 02:46:23.294921 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.294927 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.294934 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.294948 | orchestrator |
2026-04-07 02:46:23.294955 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-07 02:46:23.294961 | orchestrator | Tuesday 07 April 2026 02:46:16 +0000 (0:00:01.122) 0:09:26.637 *********
2026-04-07 02:46:23.294968 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.294974 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.294981 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.294987 | orchestrator |
2026-04-07 02:46:23.294994 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-07 02:46:23.295000 | orchestrator | Tuesday 07 April 2026 02:46:16 +0000 (0:00:00.353) 0:09:26.991 *********
2026-04-07 02:46:23.295007 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295013 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295020 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295026 | orchestrator |
2026-04-07 02:46:23.295033 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-07 02:46:23.295039 | orchestrator | Tuesday 07 April 2026 02:46:16 +0000 (0:00:00.411) 0:09:27.403 *********
2026-04-07 02:46:23.295046 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295053 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295059 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295066 | orchestrator |
2026-04-07 02:46:23.295072 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-07 02:46:23.295079 | orchestrator | Tuesday 07 April 2026 02:46:17 +0000 (0:00:00.514) 0:09:27.917 *********
2026-04-07 02:46:23.295085 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295092 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295099 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295105 | orchestrator |
2026-04-07 02:46:23.295112 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-07 02:46:23.295122 | orchestrator | Tuesday 07 April 2026 02:46:18 +0000 (0:00:01.060) 0:09:28.977 *********
2026-04-07 02:46:23.295129 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295136 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295142 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295149 | orchestrator |
2026-04-07 02:46:23.295155 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-07 02:46:23.295162 | orchestrator | Tuesday 07 April 2026 02:46:19 +0000 (0:00:00.767) 0:09:29.744 *********
2026-04-07 02:46:23.295168 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295175 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295181 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295188 | orchestrator |
2026-04-07 02:46:23.295195 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-07 02:46:23.295201 | orchestrator | Tuesday 07 April 2026 02:46:19 +0000 (0:00:00.400) 0:09:30.145 *********
2026-04-07 02:46:23.295208 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295214 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295221 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295228 | orchestrator |
2026-04-07 02:46:23.295234 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-07 02:46:23.295241 | orchestrator | Tuesday 07 April 2026 02:46:19 +0000 (0:00:00.368) 0:09:30.513 *********
2026-04-07 02:46:23.295247 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295254 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295261 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295267 | orchestrator |
2026-04-07 02:46:23.295273 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-07 02:46:23.295280 | orchestrator | Tuesday 07 April 2026 02:46:20 +0000 (0:00:00.682) 0:09:31.195 *********
2026-04-07 02:46:23.295291 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295298 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295304 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295311 | orchestrator |
2026-04-07 02:46:23.295318 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-07 02:46:23.295324 | orchestrator | Tuesday 07 April 2026 02:46:21 +0000 (0:00:00.429) 0:09:31.625 *********
2026-04-07 02:46:23.295331 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295337 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295344 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295350 | orchestrator |
2026-04-07 02:46:23.295357 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-07 02:46:23.295363 | orchestrator | Tuesday 07 April 2026 02:46:21 +0000 (0:00:00.407) 0:09:32.033 *********
2026-04-07 02:46:23.295370 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295376 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295383 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295389 | orchestrator |
2026-04-07 02:46:23.295396 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-07 02:46:23.295403 | orchestrator | Tuesday 07 April 2026 02:46:21 +0000 (0:00:00.332) 0:09:32.366 *********
2026-04-07 02:46:23.295409 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295416 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295423 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295429 | orchestrator |
2026-04-07 02:46:23.295436 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-07 02:46:23.295442 | orchestrator | Tuesday 07 April 2026 02:46:22 +0000 (0:00:00.688) 0:09:33.054 *********
2026-04-07 02:46:23.295449 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:46:23.295455 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:46:23.295462 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:46:23.295470 | orchestrator |
2026-04-07 02:46:23.295481 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-07 02:46:23.295493 | orchestrator | Tuesday 07 April 2026 02:46:22 +0000 (0:00:00.372) 0:09:33.426 *********
2026-04-07 02:46:23.295504 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:46:23.295520 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:46:23.295533 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:46:23.295544 | orchestrator |
2026-04-07 02:46:23.295563 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-07 02:47:04.104137 | orchestrator | Tuesday 07 April 2026 02:46:23 +0000 (0:00:00.400) 0:09:33.827 *********
2026-04-07 02:47:04.104252 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:47:04.104267 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:47:04.104277 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:47:04.104286 | orchestrator |
2026-04-07 02:47:04.104296 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-07 02:47:04.104305 | orchestrator | Tuesday 07 April 2026 02:46:24 +0000 (0:00:00.944) 0:09:34.771 *********
2026-04-07 02:47:04.104314 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:47:04.104324 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:47:04.104333 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-07 02:47:04.104343 | orchestrator |
2026-04-07 02:47:04.104351 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-07 02:47:04.104360 | orchestrator | Tuesday 07 April 2026 02:46:24 +0000 (0:00:00.463) 0:09:35.235 *********
2026-04-07 02:47:04.104369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:47:04.104378 | orchestrator |
2026-04-07 02:47:04.104387 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-07 02:47:04.104395 | orchestrator | Tuesday 07 April 2026 02:46:26 +0000 (0:00:02.226) 0:09:37.462 *********
2026-04-07 02:47:04.104406 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-07 02:47:04.104439 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:47:04.104455 | orchestrator |
2026-04-07 02:47:04.104470 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-07 02:47:04.104484 | orchestrator | Tuesday 07 April 2026 02:46:27 +0000 (0:00:00.244) 0:09:37.706 *********
2026-04-07 02:47:04.104517 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-07 02:47:04.104540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-07 02:47:04.104554 | orchestrator |
2026-04-07 02:47:04.104568 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-07 02:47:04.104583 | orchestrator | Tuesday 07 April 2026 02:46:35 +0000 (0:00:08.591) 0:09:46.298 *********
2026-04-07 02:47:04.104662 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 02:47:04.104679 | orchestrator |
2026-04-07 02:47:04.104695 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-07 02:47:04.104711 | orchestrator | Tuesday 07 April 2026 02:46:39 +0000 (0:00:03.742) 0:09:50.041 *********
2026-04-07 02:47:04.104727 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 02:47:04.104745 | orchestrator |
2026-04-07 02:47:04.104762 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-07 02:47:04.104780 | orchestrator | Tuesday 07 April 2026 02:46:40 +0000 (0:00:00.982) 0:09:51.023 *********
2026-04-07 02:47:04.104799 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-07 02:47:04.104810 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-07 02:47:04.104820 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-07 02:47:04.104831 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-07 02:47:04.104842 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-07 02:47:04.104852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-07 02:47:04.104863 | orchestrator |
2026-04-07 02:47:04.104873 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-07 02:47:04.104883 | orchestrator | Tuesday 07 April 2026 02:46:41 +0000 (0:00:01.154) 0:09:52.177 *********
2026-04-07 02:47:04.104894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-07 02:47:04.104904 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-07 02:47:04.104915 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-07 02:47:04.104925 | orchestrator |
2026-04-07 02:47:04.104936 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-07 02:47:04.104946 | orchestrator | Tuesday 07 April 2026 02:46:43 +0000 (0:00:02.299) 0:09:54.477 *********
2026-04-07 02:47:04.104956 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 02:47:04.104968 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-07 02:47:04.104978 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:47:04.104989 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 02:47:04.105001 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-07 02:47:04.105011 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:47:04.105023 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 02:47:04.105033 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-07 02:47:04.105054 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:47:04.105063 | orchestrator |
2026-04-07 02:47:04.105072 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-07 02:47:04.105099 | orchestrator | Tuesday 07 April 2026 02:46:45 +0000 (0:00:01.296) 0:09:55.773 *********
2026-04-07 02:47:04.105108 | orchestrator | changed: [testbed-node-3]
2026-04-07 02:47:04.105117 | orchestrator | changed: [testbed-node-4]
2026-04-07 02:47:04.105126 | orchestrator | changed: [testbed-node-5]
2026-04-07 02:47:04.105135 | orchestrator |
2026-04-07 02:47:04.105143 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-07 02:47:04.105152 | orchestrator | Tuesday 07 April 2026 02:46:48 +0000 (0:00:03.138) 0:09:58.912 ********* 2026-04-07 02:47:04.105161 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:04.105169 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:04.105178 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:04.105186 | orchestrator | 2026-04-07 02:47:04.105195 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-07 02:47:04.105204 | orchestrator | Tuesday 07 April 2026 02:46:48 +0000 (0:00:00.346) 0:09:59.258 ********* 2026-04-07 02:47:04.105213 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:04.105222 | orchestrator | 2026-04-07 02:47:04.105230 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-07 02:47:04.105239 | orchestrator | Tuesday 07 April 2026 02:46:49 +0000 (0:00:00.912) 0:10:00.170 ********* 2026-04-07 02:47:04.105248 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:04.105256 | orchestrator | 2026-04-07 02:47:04.105265 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-07 02:47:04.105274 | orchestrator | Tuesday 07 April 2026 02:46:50 +0000 (0:00:00.649) 0:10:00.820 ********* 2026-04-07 02:47:04.105282 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:04.105291 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105300 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105308 | orchestrator | 2026-04-07 02:47:04.105317 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-07 02:47:04.105331 | orchestrator | Tuesday 07 April 2026 02:46:51 +0000 (0:00:01.372) 0:10:02.192 ********* 2026-04-07 
02:47:04.105340 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:04.105349 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105358 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105366 | orchestrator | 2026-04-07 02:47:04.105375 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-07 02:47:04.105384 | orchestrator | Tuesday 07 April 2026 02:46:53 +0000 (0:00:01.520) 0:10:03.712 ********* 2026-04-07 02:47:04.105392 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:04.105401 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105410 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105418 | orchestrator | 2026-04-07 02:47:04.105427 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-07 02:47:04.105436 | orchestrator | Tuesday 07 April 2026 02:46:54 +0000 (0:00:01.796) 0:10:05.509 ********* 2026-04-07 02:47:04.105444 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:04.105453 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105462 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105470 | orchestrator | 2026-04-07 02:47:04.105479 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-07 02:47:04.105488 | orchestrator | Tuesday 07 April 2026 02:46:56 +0000 (0:00:01.937) 0:10:07.446 ********* 2026-04-07 02:47:04.105497 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:04.105506 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:04.105514 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:04.105523 | orchestrator | 2026-04-07 02:47:04.105532 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 02:47:04.105546 | orchestrator | Tuesday 07 April 2026 02:46:58 +0000 (0:00:01.748) 0:10:09.195 ********* 2026-04-07 02:47:04.105555 | orchestrator 
| changed: [testbed-node-3] 2026-04-07 02:47:04.105564 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105573 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105581 | orchestrator | 2026-04-07 02:47:04.105590 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-07 02:47:04.105629 | orchestrator | Tuesday 07 April 2026 02:46:59 +0000 (0:00:00.738) 0:10:09.933 ********* 2026-04-07 02:47:04.105640 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:04.105649 | orchestrator | 2026-04-07 02:47:04.105658 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-07 02:47:04.105666 | orchestrator | Tuesday 07 April 2026 02:47:00 +0000 (0:00:00.956) 0:10:10.889 ********* 2026-04-07 02:47:04.105675 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:04.105684 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:04.105692 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:04.105701 | orchestrator | 2026-04-07 02:47:04.105709 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-07 02:47:04.105718 | orchestrator | Tuesday 07 April 2026 02:47:00 +0000 (0:00:00.382) 0:10:11.272 ********* 2026-04-07 02:47:04.105726 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:04.105735 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:04.105744 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:04.105753 | orchestrator | 2026-04-07 02:47:04.105761 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-07 02:47:04.105770 | orchestrator | Tuesday 07 April 2026 02:47:02 +0000 (0:00:01.317) 0:10:12.589 ********* 2026-04-07 02:47:04.105779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:47:04.105788 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:47:04.105797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:47:04.105805 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:04.105814 | orchestrator | 2026-04-07 02:47:04.105823 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-07 02:47:04.105832 | orchestrator | Tuesday 07 April 2026 02:47:03 +0000 (0:00:01.030) 0:10:13.619 ********* 2026-04-07 02:47:04.105840 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:04.105849 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:04.105863 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.163514 | orchestrator | 2026-04-07 02:47:24.163668 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-07 02:47:24.163687 | orchestrator | 2026-04-07 02:47:24.163699 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 02:47:24.163711 | orchestrator | Tuesday 07 April 2026 02:47:04 +0000 (0:00:01.013) 0:10:14.633 ********* 2026-04-07 02:47:24.163723 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:24.163736 | orchestrator | 2026-04-07 02:47:24.163747 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 02:47:24.163757 | orchestrator | Tuesday 07 April 2026 02:47:04 +0000 (0:00:00.611) 0:10:15.245 ********* 2026-04-07 02:47:24.163768 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:24.163780 | orchestrator | 2026-04-07 02:47:24.163790 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 02:47:24.163801 | 
orchestrator | Tuesday 07 April 2026 02:47:05 +0000 (0:00:00.944) 0:10:16.189 ********* 2026-04-07 02:47:24.163812 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.163824 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.163835 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.163869 | orchestrator | 2026-04-07 02:47:24.163882 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 02:47:24.163892 | orchestrator | Tuesday 07 April 2026 02:47:06 +0000 (0:00:00.379) 0:10:16.569 ********* 2026-04-07 02:47:24.163903 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.163914 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.163925 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.163936 | orchestrator | 2026-04-07 02:47:24.163946 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 02:47:24.163957 | orchestrator | Tuesday 07 April 2026 02:47:06 +0000 (0:00:00.795) 0:10:17.364 ********* 2026-04-07 02:47:24.163967 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.163991 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164003 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164013 | orchestrator | 2026-04-07 02:47:24.164024 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 02:47:24.164035 | orchestrator | Tuesday 07 April 2026 02:47:07 +0000 (0:00:01.083) 0:10:18.448 ********* 2026-04-07 02:47:24.164045 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164056 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164066 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164077 | orchestrator | 2026-04-07 02:47:24.164088 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 02:47:24.164098 | orchestrator | Tuesday 07 April 2026 02:47:08 +0000 
(0:00:00.785) 0:10:19.233 ********* 2026-04-07 02:47:24.164109 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164132 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164143 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164154 | orchestrator | 2026-04-07 02:47:24.164165 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 02:47:24.164175 | orchestrator | Tuesday 07 April 2026 02:47:09 +0000 (0:00:00.342) 0:10:19.576 ********* 2026-04-07 02:47:24.164186 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164198 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164208 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164219 | orchestrator | 2026-04-07 02:47:24.164230 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 02:47:24.164241 | orchestrator | Tuesday 07 April 2026 02:47:09 +0000 (0:00:00.402) 0:10:19.978 ********* 2026-04-07 02:47:24.164251 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164262 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164272 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164283 | orchestrator | 2026-04-07 02:47:24.164294 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 02:47:24.164304 | orchestrator | Tuesday 07 April 2026 02:47:10 +0000 (0:00:00.685) 0:10:20.664 ********* 2026-04-07 02:47:24.164315 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164326 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164336 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164347 | orchestrator | 2026-04-07 02:47:24.164357 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 02:47:24.164368 | orchestrator | Tuesday 07 April 2026 02:47:10 +0000 (0:00:00.786) 
0:10:21.450 ********* 2026-04-07 02:47:24.164379 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164389 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164400 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164410 | orchestrator | 2026-04-07 02:47:24.164421 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 02:47:24.164432 | orchestrator | Tuesday 07 April 2026 02:47:11 +0000 (0:00:00.795) 0:10:22.245 ********* 2026-04-07 02:47:24.164442 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164453 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164464 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164474 | orchestrator | 2026-04-07 02:47:24.164485 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 02:47:24.164505 | orchestrator | Tuesday 07 April 2026 02:47:12 +0000 (0:00:00.390) 0:10:22.636 ********* 2026-04-07 02:47:24.164515 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164526 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164538 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164548 | orchestrator | 2026-04-07 02:47:24.164559 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 02:47:24.164570 | orchestrator | Tuesday 07 April 2026 02:47:12 +0000 (0:00:00.678) 0:10:23.314 ********* 2026-04-07 02:47:24.164581 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164614 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164634 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164653 | orchestrator | 2026-04-07 02:47:24.164672 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 02:47:24.164700 | orchestrator | Tuesday 07 April 2026 02:47:13 +0000 (0:00:00.398) 0:10:23.713 ********* 2026-04-07 
02:47:24.164722 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164764 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164783 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164798 | orchestrator | 2026-04-07 02:47:24.164816 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 02:47:24.164835 | orchestrator | Tuesday 07 April 2026 02:47:13 +0000 (0:00:00.406) 0:10:24.120 ********* 2026-04-07 02:47:24.164855 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.164875 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.164896 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.164916 | orchestrator | 2026-04-07 02:47:24.164937 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 02:47:24.164952 | orchestrator | Tuesday 07 April 2026 02:47:13 +0000 (0:00:00.379) 0:10:24.499 ********* 2026-04-07 02:47:24.164963 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.164973 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.164984 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.164995 | orchestrator | 2026-04-07 02:47:24.165006 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 02:47:24.165016 | orchestrator | Tuesday 07 April 2026 02:47:14 +0000 (0:00:00.693) 0:10:25.192 ********* 2026-04-07 02:47:24.165027 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.165038 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.165049 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.165059 | orchestrator | 2026-04-07 02:47:24.165070 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 02:47:24.165081 | orchestrator | Tuesday 07 April 2026 02:47:15 +0000 (0:00:00.362) 0:10:25.555 ********* 2026-04-07 02:47:24.165091 | orchestrator | 
skipping: [testbed-node-3] 2026-04-07 02:47:24.165102 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.165113 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.165123 | orchestrator | 2026-04-07 02:47:24.165134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 02:47:24.165145 | orchestrator | Tuesday 07 April 2026 02:47:15 +0000 (0:00:00.350) 0:10:25.905 ********* 2026-04-07 02:47:24.165156 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.165167 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.165177 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.165188 | orchestrator | 2026-04-07 02:47:24.165208 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 02:47:24.165219 | orchestrator | Tuesday 07 April 2026 02:47:15 +0000 (0:00:00.376) 0:10:26.282 ********* 2026-04-07 02:47:24.165230 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:47:24.165240 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:47:24.165251 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:47:24.165262 | orchestrator | 2026-04-07 02:47:24.165273 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-07 02:47:24.165284 | orchestrator | Tuesday 07 April 2026 02:47:16 +0000 (0:00:00.955) 0:10:27.238 ********* 2026-04-07 02:47:24.165307 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:24.165318 | orchestrator | 2026-04-07 02:47:24.165329 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-07 02:47:24.165339 | orchestrator | Tuesday 07 April 2026 02:47:17 +0000 (0:00:00.799) 0:10:28.038 ********* 2026-04-07 02:47:24.165350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:47:24.165361 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-07 02:47:24.165372 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:47:24.165384 | orchestrator | 2026-04-07 02:47:24.165402 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-07 02:47:24.165422 | orchestrator | Tuesday 07 April 2026 02:47:20 +0000 (0:00:02.703) 0:10:30.741 ********* 2026-04-07 02:47:24.165451 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-07 02:47:24.165471 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-07 02:47:24.165489 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:47:24.165507 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-07 02:47:24.165523 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-07 02:47:24.165539 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:47:24.165558 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-07 02:47:24.165574 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-07 02:47:24.165621 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:47:24.165642 | orchestrator | 2026-04-07 02:47:24.165661 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-07 02:47:24.165680 | orchestrator | Tuesday 07 April 2026 02:47:21 +0000 (0:00:01.735) 0:10:32.477 ********* 2026-04-07 02:47:24.165699 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:47:24.165717 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:47:24.165731 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:47:24.165742 | orchestrator | 2026-04-07 02:47:24.165753 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-07 02:47:24.165763 | orchestrator | Tuesday 07 April 2026 02:47:22 +0000 (0:00:00.356) 0:10:32.834 ********* 2026-04-07 02:47:24.165774 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:47:24.165785 | orchestrator | 2026-04-07 02:47:24.165796 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-07 02:47:24.165807 | orchestrator | Tuesday 07 April 2026 02:47:23 +0000 (0:00:00.905) 0:10:33.739 ********* 2026-04-07 02:47:24.165819 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-07 02:47:24.165832 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-07 02:47:24.165855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-07 02:48:15.737484 | orchestrator | 2026-04-07 02:48:15.737618 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-07 02:48:15.737636 | orchestrator | Tuesday 07 April 2026 02:47:24 +0000 (0:00:00.943) 0:10:34.683 ********* 2026-04-07 02:48:15.737647 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:48:15.737659 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-07 02:48:15.737670 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:48:15.737680 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-07 02:48:15.737714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-04-07 02:48:15.737725 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-07 02:48:15.737735 | orchestrator | 2026-04-07 02:48:15.737745 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-07 02:48:15.737754 | orchestrator | Tuesday 07 April 2026 02:47:28 +0000 (0:00:04.745) 0:10:39.429 ********* 2026-04-07 02:48:15.737764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:48:15.737774 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:48:15.737784 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:48:15.737793 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:48:15.737803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:48:15.737839 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:48:15.737859 | orchestrator | 2026-04-07 02:48:15.737868 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-07 02:48:15.737878 | orchestrator | Tuesday 07 April 2026 02:47:31 +0000 (0:00:02.273) 0:10:41.703 ********* 2026-04-07 02:48:15.737889 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-07 02:48:15.737899 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:48:15.737909 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-07 02:48:15.737919 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:48:15.737929 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-07 02:48:15.737938 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:48:15.737948 | orchestrator | 2026-04-07 02:48:15.737958 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] 
************************************** 2026-04-07 02:48:15.737967 | orchestrator | Tuesday 07 April 2026 02:47:32 +0000 (0:00:01.482) 0:10:43.185 ********* 2026-04-07 02:48:15.737977 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-07 02:48:15.737986 | orchestrator | 2026-04-07 02:48:15.737998 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-07 02:48:15.738009 | orchestrator | Tuesday 07 April 2026 02:47:32 +0000 (0:00:00.254) 0:10:43.440 ********* 2026-04-07 02:48:15.738073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738132 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:15.738143 | orchestrator | 2026-04-07 02:48:15.738153 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-07 02:48:15.738165 | orchestrator | Tuesday 07 April 2026 02:47:33 +0000 (0:00:00.697) 0:10:44.137 ********* 2026-04-07 02:48:15.738176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738187 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 02:48:15.738239 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:15.738250 | orchestrator | 2026-04-07 02:48:15.738262 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-07 02:48:15.738273 | orchestrator | Tuesday 07 April 2026 02:47:34 +0000 (0:00:00.728) 0:10:44.865 ********* 2026-04-07 02:48:15.738303 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-07 02:48:15.738315 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-07 02:48:15.738325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-07 02:48:15.738334 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-07 02:48:15.738344 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2026-04-07 02:48:15.738354 | orchestrator | 2026-04-07 02:48:15.738363 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-07 02:48:15.738373 | orchestrator | Tuesday 07 April 2026 02:48:04 +0000 (0:00:29.886) 0:11:14.752 ********* 2026-04-07 02:48:15.738383 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:15.738392 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:15.738402 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:15.738411 | orchestrator | 2026-04-07 02:48:15.738421 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-07 02:48:15.738431 | orchestrator | Tuesday 07 April 2026 02:48:04 +0000 (0:00:00.379) 0:11:15.132 ********* 2026-04-07 02:48:15.738441 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:15.738450 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:15.738460 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:15.738469 | orchestrator | 2026-04-07 02:48:15.738485 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-07 02:48:15.738494 | orchestrator | Tuesday 07 April 2026 02:48:04 +0000 (0:00:00.344) 0:11:15.476 ********* 2026-04-07 02:48:15.738504 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:48:15.738514 | orchestrator | 2026-04-07 02:48:15.738524 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-07 02:48:15.738533 | orchestrator | Tuesday 07 April 2026 02:48:05 +0000 (0:00:00.955) 0:11:16.431 ********* 2026-04-07 02:48:15.738543 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:48:15.738553 | orchestrator | 2026-04-07 02:48:15.738562 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2026-04-07 02:48:15.738572 | orchestrator | Tuesday 07 April 2026 02:48:06 +0000 (0:00:00.910) 0:11:17.342 ********* 2026-04-07 02:48:15.738581 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:48:15.738592 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:48:15.738714 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:48:15.738733 | orchestrator | 2026-04-07 02:48:15.738743 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-07 02:48:15.738753 | orchestrator | Tuesday 07 April 2026 02:48:08 +0000 (0:00:01.331) 0:11:18.673 ********* 2026-04-07 02:48:15.738771 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:48:15.738781 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:48:15.738790 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:48:15.738800 | orchestrator | 2026-04-07 02:48:15.738810 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-07 02:48:15.738819 | orchestrator | Tuesday 07 April 2026 02:48:09 +0000 (0:00:01.265) 0:11:19.938 ********* 2026-04-07 02:48:15.738829 | orchestrator | changed: [testbed-node-3] 2026-04-07 02:48:15.738838 | orchestrator | changed: [testbed-node-4] 2026-04-07 02:48:15.738848 | orchestrator | changed: [testbed-node-5] 2026-04-07 02:48:15.738857 | orchestrator | 2026-04-07 02:48:15.738867 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-07 02:48:15.738876 | orchestrator | Tuesday 07 April 2026 02:48:11 +0000 (0:00:01.864) 0:11:21.803 ********* 2026-04-07 02:48:15.738886 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-07 02:48:15.738895 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2026-04-07 02:48:15.738905 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-07 02:48:15.738915 | orchestrator | 2026-04-07 02:48:15.738924 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-07 02:48:15.738934 | orchestrator | Tuesday 07 April 2026 02:48:14 +0000 (0:00:02.862) 0:11:24.665 ********* 2026-04-07 02:48:15.738943 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:15.738953 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:15.738962 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:15.738972 | orchestrator | 2026-04-07 02:48:15.738981 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-07 02:48:15.738989 | orchestrator | Tuesday 07 April 2026 02:48:14 +0000 (0:00:00.425) 0:11:25.091 ********* 2026-04-07 02:48:15.738997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:48:15.739005 | orchestrator | 2026-04-07 02:48:15.739013 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-07 02:48:15.739021 | orchestrator | Tuesday 07 April 2026 02:48:15 +0000 (0:00:00.953) 0:11:26.045 ********* 2026-04-07 02:48:15.739035 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:18.592951 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:18.593036 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:18.593045 | orchestrator | 2026-04-07 02:48:18.593052 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-07 02:48:18.593059 | orchestrator | Tuesday 07 April 2026 02:48:15 +0000 (0:00:00.389) 0:11:26.435 ********* 2026-04-07 02:48:18.593065 | orchestrator | skipping: [testbed-node-3] 2026-04-07 
02:48:18.593072 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:18.593077 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:18.593083 | orchestrator | 2026-04-07 02:48:18.593088 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-07 02:48:18.593094 | orchestrator | Tuesday 07 April 2026 02:48:16 +0000 (0:00:00.407) 0:11:26.843 ********* 2026-04-07 02:48:18.593100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:48:18.593106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:48:18.593111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:48:18.593116 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:18.593121 | orchestrator | 2026-04-07 02:48:18.593127 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-07 02:48:18.593132 | orchestrator | Tuesday 07 April 2026 02:48:17 +0000 (0:00:01.064) 0:11:27.908 ********* 2026-04-07 02:48:18.593138 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:18.593144 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:18.593169 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:18.593175 | orchestrator | 2026-04-07 02:48:18.593181 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:48:18.593187 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-07 02:48:18.593205 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-07 02:48:18.593211 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-07 02:48:18.593216 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-07 
02:48:18.593222 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-07 02:48:18.593227 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-07 02:48:18.593233 | orchestrator | 2026-04-07 02:48:18.593238 | orchestrator | 2026-04-07 02:48:18.593243 | orchestrator | 2026-04-07 02:48:18.593249 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:48:18.593254 | orchestrator | Tuesday 07 April 2026 02:48:18 +0000 (0:00:00.653) 0:11:28.562 ********* 2026-04-07 02:48:18.593260 | orchestrator | =============================================================================== 2026-04-07 02:48:18.593265 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.23s 2026-04-07 02:48:18.593270 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.82s 2026-04-07 02:48:18.593276 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.89s 2026-04-07 02:48:18.593281 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.51s 2026-04-07 02:48:18.593286 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.07s 2026-04-07 02:48:18.593292 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.84s 2026-04-07 02:48:18.593297 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.62s 2026-04-07 02:48:18.593302 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.48s 2026-04-07 02:48:18.593308 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.59s 2026-04-07 02:48:18.593313 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.59s 2026-04-07 02:48:18.593318 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.97s 2026-04-07 02:48:18.593324 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.63s 2026-04-07 02:48:18.593329 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.38s 2026-04-07 02:48:18.593334 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.75s 2026-04-07 02:48:18.593340 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.33s 2026-04-07 02:48:18.593345 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.24s 2026-04-07 02:48:18.593351 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.99s 2026-04-07 02:48:18.593356 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s 2026-04-07 02:48:18.593361 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.43s 2026-04-07 02:48:18.593367 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.38s 2026-04-07 02:48:21.368938 | orchestrator | 2026-04-07 02:48:21 | INFO  | Task 66400560-f88b-4dbf-98de-bdb8563a7254 
(ceph-pools) was prepared for execution. 2026-04-07 02:48:21.369055 | orchestrator | 2026-04-07 02:48:21 | INFO  | It takes a moment until task 66400560-f88b-4dbf-98de-bdb8563a7254 (ceph-pools) has been started and output is visible here. 2026-04-07 02:48:37.087810 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 02:48:37.087945 | orchestrator | 2.16.14 2026-04-07 02:48:37.087964 | orchestrator | 2026-04-07 02:48:37.087977 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-07 02:48:37.087990 | orchestrator | 2026-04-07 02:48:37.088001 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-07 02:48:37.088012 | orchestrator | Tuesday 07 April 2026 02:48:26 +0000 (0:00:00.672) 0:00:00.672 ********* 2026-04-07 02:48:37.088036 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:48:37.088098 | orchestrator | 2026-04-07 02:48:37.088113 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-07 02:48:37.088124 | orchestrator | Tuesday 07 April 2026 02:48:27 +0000 (0:00:00.756) 0:00:01.428 ********* 2026-04-07 02:48:37.088135 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088146 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088156 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088167 | orchestrator | 2026-04-07 02:48:37.088178 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-07 02:48:37.088189 | orchestrator | Tuesday 07 April 2026 02:48:27 +0000 (0:00:00.781) 0:00:02.210 ********* 2026-04-07 02:48:37.088200 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088211 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088221 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088232 
| orchestrator | 2026-04-07 02:48:37.088243 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-07 02:48:37.088254 | orchestrator | Tuesday 07 April 2026 02:48:28 +0000 (0:00:00.322) 0:00:02.532 ********* 2026-04-07 02:48:37.088265 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088275 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088286 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088296 | orchestrator | 2026-04-07 02:48:37.088321 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-07 02:48:37.088333 | orchestrator | Tuesday 07 April 2026 02:48:29 +0000 (0:00:00.970) 0:00:03.503 ********* 2026-04-07 02:48:37.088346 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088359 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088372 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088384 | orchestrator | 2026-04-07 02:48:37.088396 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-07 02:48:37.088408 | orchestrator | Tuesday 07 April 2026 02:48:29 +0000 (0:00:00.355) 0:00:03.859 ********* 2026-04-07 02:48:37.088421 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088433 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088445 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088458 | orchestrator | 2026-04-07 02:48:37.088471 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-07 02:48:37.088483 | orchestrator | Tuesday 07 April 2026 02:48:29 +0000 (0:00:00.361) 0:00:04.220 ********* 2026-04-07 02:48:37.088495 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088507 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088520 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088532 | orchestrator | 2026-04-07 02:48:37.088545 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-07 02:48:37.088558 | orchestrator | Tuesday 07 April 2026 02:48:30 +0000 (0:00:00.362) 0:00:04.582 ********* 2026-04-07 02:48:37.088570 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:37.088585 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:37.088598 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:37.088634 | orchestrator | 2026-04-07 02:48:37.088647 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-07 02:48:37.088684 | orchestrator | Tuesday 07 April 2026 02:48:30 +0000 (0:00:00.670) 0:00:05.253 ********* 2026-04-07 02:48:37.088697 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088711 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088722 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088732 | orchestrator | 2026-04-07 02:48:37.088744 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-07 02:48:37.088754 | orchestrator | Tuesday 07 April 2026 02:48:31 +0000 (0:00:00.395) 0:00:05.649 ********* 2026-04-07 02:48:37.088766 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:48:37.088777 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:48:37.088787 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:48:37.088798 | orchestrator | 2026-04-07 02:48:37.088809 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-07 02:48:37.088819 | orchestrator | Tuesday 07 April 2026 02:48:31 +0000 (0:00:00.710) 0:00:06.360 ********* 2026-04-07 02:48:37.088830 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:37.088840 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:37.088851 | 
orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:37.088862 | orchestrator | 2026-04-07 02:48:37.088872 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-07 02:48:37.088890 | orchestrator | Tuesday 07 April 2026 02:48:32 +0000 (0:00:00.496) 0:00:06.856 ********* 2026-04-07 02:48:37.088908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:48:37.088927 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:48:37.088943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:48:37.088961 | orchestrator | 2026-04-07 02:48:37.088982 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-07 02:48:37.088999 | orchestrator | Tuesday 07 April 2026 02:48:34 +0000 (0:00:02.177) 0:00:09.033 ********* 2026-04-07 02:48:37.089020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 02:48:37.089040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 02:48:37.089060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 02:48:37.089072 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:37.089083 | orchestrator | 2026-04-07 02:48:37.089113 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-07 02:48:37.089129 | orchestrator | Tuesday 07 April 2026 02:48:35 +0000 (0:00:00.741) 0:00:09.774 ********* 2026-04-07 02:48:37.089149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089211 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:37.089229 | orchestrator | 2026-04-07 02:48:37.089246 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-07 02:48:37.089257 | orchestrator | Tuesday 07 April 2026 02:48:36 +0000 (0:00:01.223) 0:00:10.998 ********* 2026-04-07 02:48:37.089278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:37.089326 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:37.089337 | orchestrator | 2026-04-07 02:48:37.089348 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-07 02:48:37.089358 | orchestrator | Tuesday 07 April 2026 02:48:36 +0000 (0:00:00.200) 0:00:11.199 ********* 2026-04-07 02:48:37.089372 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4cd0634997ff', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 02:48:33.436537', 'end': '2026-04-07 02:48:33.473900', 'delta': '0:00:00.037363', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4cd0634997ff'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-07 02:48:37.089386 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e8d9f46c7c23', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 02:48:33.981087', 'end': '2026-04-07 02:48:34.015636', 'delta': '0:00:00.034549', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['e8d9f46c7c23'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-07 02:48:37.089408 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f4f6ca89ad43', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 02:48:34.502198', 'end': '2026-04-07 02:48:34.541782', 'delta': '0:00:00.039584', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4f6ca89ad43'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-07 02:48:44.756186 | orchestrator | 2026-04-07 02:48:44.756291 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-07 02:48:44.756401 | orchestrator | Tuesday 07 April 2026 02:48:37 +0000 (0:00:00.236) 0:00:11.435 ********* 2026-04-07 02:48:44.756444 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:44.756458 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:44.756471 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:44.756484 | orchestrator | 2026-04-07 02:48:44.756497 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-07 02:48:44.756509 | orchestrator | Tuesday 07 April 2026 02:48:37 +0000 (0:00:00.504) 0:00:11.940 ********* 2026-04-07 02:48:44.756522 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-07 02:48:44.756535 | orchestrator | 2026-04-07 02:48:44.756561 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-07 02:48:44.756574 | 
orchestrator | Tuesday 07 April 2026 02:48:39 +0000 (0:00:01.534) 0:00:13.475 ********* 2026-04-07 02:48:44.756587 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.756598 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.756689 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.756702 | orchestrator | 2026-04-07 02:48:44.756713 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-07 02:48:44.756725 | orchestrator | Tuesday 07 April 2026 02:48:39 +0000 (0:00:00.333) 0:00:13.808 ********* 2026-04-07 02:48:44.756737 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.756749 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.756760 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.756772 | orchestrator | 2026-04-07 02:48:44.756783 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 02:48:44.756795 | orchestrator | Tuesday 07 April 2026 02:48:40 +0000 (0:00:01.202) 0:00:15.011 ********* 2026-04-07 02:48:44.756807 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.756820 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.756832 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.756846 | orchestrator | 2026-04-07 02:48:44.756859 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-07 02:48:44.756872 | orchestrator | Tuesday 07 April 2026 02:48:40 +0000 (0:00:00.320) 0:00:15.331 ********* 2026-04-07 02:48:44.756884 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:44.756897 | orchestrator | 2026-04-07 02:48:44.756909 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-07 02:48:44.756923 | orchestrator | Tuesday 07 April 2026 02:48:41 +0000 (0:00:00.131) 0:00:15.462 ********* 2026-04-07 02:48:44.756935 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 02:48:44.756948 | orchestrator | 2026-04-07 02:48:44.756961 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 02:48:44.756973 | orchestrator | Tuesday 07 April 2026 02:48:41 +0000 (0:00:00.250) 0:00:15.713 ********* 2026-04-07 02:48:44.756986 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.756998 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757034 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757046 | orchestrator | 2026-04-07 02:48:44.757057 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-07 02:48:44.757068 | orchestrator | Tuesday 07 April 2026 02:48:41 +0000 (0:00:00.318) 0:00:16.031 ********* 2026-04-07 02:48:44.757079 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757091 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757102 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757113 | orchestrator | 2026-04-07 02:48:44.757125 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-07 02:48:44.757136 | orchestrator | Tuesday 07 April 2026 02:48:42 +0000 (0:00:00.368) 0:00:16.400 ********* 2026-04-07 02:48:44.757147 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757158 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757169 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757180 | orchestrator | 2026-04-07 02:48:44.757191 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-07 02:48:44.757201 | orchestrator | Tuesday 07 April 2026 02:48:42 +0000 (0:00:00.616) 0:00:17.017 ********* 2026-04-07 02:48:44.757222 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757234 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757244 | orchestrator | skipping: 
[testbed-node-5] 2026-04-07 02:48:44.757256 | orchestrator | 2026-04-07 02:48:44.757266 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-07 02:48:44.757278 | orchestrator | Tuesday 07 April 2026 02:48:43 +0000 (0:00:00.400) 0:00:17.417 ********* 2026-04-07 02:48:44.757289 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757299 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757309 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757320 | orchestrator | 2026-04-07 02:48:44.757330 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-07 02:48:44.757340 | orchestrator | Tuesday 07 April 2026 02:48:43 +0000 (0:00:00.356) 0:00:17.774 ********* 2026-04-07 02:48:44.757351 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757361 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757371 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757382 | orchestrator | 2026-04-07 02:48:44.757392 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-07 02:48:44.757404 | orchestrator | Tuesday 07 April 2026 02:48:44 +0000 (0:00:00.643) 0:00:18.417 ********* 2026-04-07 02:48:44.757416 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.757428 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.757439 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:44.757450 | orchestrator | 2026-04-07 02:48:44.757498 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-07 02:48:44.757511 | orchestrator | Tuesday 07 April 2026 02:48:44 +0000 (0:00:00.379) 0:00:18.797 ********* 2026-04-07 02:48:44.757548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.757715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.813935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.814069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.814081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.814103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.814116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.814124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 
'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.814176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967837 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:44.967848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.967868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.967879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.967889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.967895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:44.967901 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:44.967906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:44.967926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279286 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-07 02:48:45.279318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:45.279331 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDOgFQ-qFyi-gVi2-LQBC-OZQf-u9TS-0kON4x', 'scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7', 'scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:45.279338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:45.279344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599', 'scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:45.279350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-07 02:48:45.279356 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:45.279362 | orchestrator | 2026-04-07 02:48:45.279368 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 02:48:45.279374 | orchestrator | Tuesday 07 April 2026 02:48:45 +0000 (0:00:00.715) 0:00:19.513 ********* 2026-04-07 02:48:45.279384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:45.401762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:45.401862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-07 02:48:45.401875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.401883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.401891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.401899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.401968 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.402110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520124 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520182 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520193 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.520223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653252 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653359 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:48:45.653383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.653570 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:48:45.653592 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDOgFQ-qFyi-gVi2-LQBC-OZQf-u9TS-0kON4x', 'scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7', 'scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:45.806530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:57.179880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599', 'scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:57.180002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-07-01-23-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-07 02:48:57.180048 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:48:57.180063 | orchestrator |
2026-04-07 02:48:57.180076 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-07 02:48:57.180088 | orchestrator | Tuesday 07 April 2026 02:48:45 +0000 (0:00:00.645) 0:00:20.159 *********
2026-04-07 02:48:57.180099 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:48:57.180112 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:48:57.180123 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:48:57.180133 | orchestrator |
2026-04-07 02:48:57.180144 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-07 02:48:57.180155 | orchestrator | Tuesday 07 April 2026 02:48:46 +0000 (0:00:00.937) 0:00:21.097 *********
2026-04-07 02:48:57.180166 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:48:57.180177 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:48:57.180187 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:48:57.180198 | orchestrator |
2026-04-07 02:48:57.180209 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-07 02:48:57.180219 | orchestrator | Tuesday 07 April 2026 02:48:47 +0000 (0:00:00.332) 0:00:21.429 *********
2026-04-07 02:48:57.180230 | orchestrator | ok: [testbed-node-3]
2026-04-07 02:48:57.180241 | orchestrator | ok: [testbed-node-4]
2026-04-07 02:48:57.180252 | orchestrator | ok: [testbed-node-5]
2026-04-07 02:48:57.180264 | orchestrator |
2026-04-07 02:48:57.180301 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-07 02:48:57.180320 | orchestrator | Tuesday 07 April 2026 02:48:47 +0000 (0:00:00.700) 0:00:22.130 *********
2026-04-07 02:48:57.180338 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:48:57.180356 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:48:57.180376 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:48:57.180396 | orchestrator |
2026-04-07 02:48:57.180415 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-07 02:48:57.180434 | orchestrator | Tuesday 07 April 2026 02:48:48 +0000 (0:00:00.785) 0:00:22.468 *********
2026-04-07 02:48:57.180448 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:48:57.180461 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:48:57.180473 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:48:57.180485 | orchestrator |
2026-04-07 02:48:57.180512 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-07 02:48:57.180525 | orchestrator | Tuesday 07 April 2026 02:48:48 +0000 (0:00:00.335) 0:00:23.253 *********
2026-04-07 02:48:57.180538 | orchestrator | skipping: [testbed-node-3]
2026-04-07 02:48:57.180551 | orchestrator | skipping: [testbed-node-4]
2026-04-07 02:48:57.180563 | orchestrator | skipping: [testbed-node-5]
2026-04-07 02:48:57.180575 | orchestrator |
2026-04-07 02:48:57.180588 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-07 02:48:57.180601 | orchestrator | Tuesday 07 April 2026 02:48:49 +0000 (0:00:00.335) 0:00:23.589 *********
2026-04-07 02:48:57.180816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 02:48:57.180834 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 02:48:57.180845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 02:48:57.180856 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 02:48:57.180867 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 02:48:57.180878 | orchestrator
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-07 02:48:57.180889 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-07 02:48:57.180916 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-07 02:48:57.180927 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-07 02:48:57.180938 | orchestrator | 2026-04-07 02:48:57.180950 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 02:48:57.180961 | orchestrator | Tuesday 07 April 2026 02:48:50 +0000 (0:00:01.173) 0:00:24.763 ********* 2026-04-07 02:48:57.180993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 02:48:57.181006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 02:48:57.181016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 02:48:57.181027 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-07 02:48:57.181049 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-07 02:48:57.181060 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-07 02:48:57.181071 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:57.181082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-07 02:48:57.181092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-07 02:48:57.181103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-07 02:48:57.181114 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:57.181125 | orchestrator | 2026-04-07 02:48:57.181136 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 02:48:57.181147 | orchestrator | Tuesday 07 April 2026 02:48:50 +0000 (0:00:00.416) 0:00:25.179 ********* 2026-04-07 
02:48:57.181158 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:48:57.181169 | orchestrator | 2026-04-07 02:48:57.181181 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 02:48:57.181193 | orchestrator | Tuesday 07 April 2026 02:48:51 +0000 (0:00:00.882) 0:00:26.061 ********* 2026-04-07 02:48:57.181204 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181215 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:57.181226 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:57.181237 | orchestrator | 2026-04-07 02:48:57.181248 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 02:48:57.181258 | orchestrator | Tuesday 07 April 2026 02:48:52 +0000 (0:00:00.380) 0:00:26.442 ********* 2026-04-07 02:48:57.181269 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181280 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:57.181291 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:57.181301 | orchestrator | 2026-04-07 02:48:57.181312 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-07 02:48:57.181323 | orchestrator | Tuesday 07 April 2026 02:48:52 +0000 (0:00:00.385) 0:00:26.827 ********* 2026-04-07 02:48:57.181334 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181343 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:48:57.181353 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:48:57.181362 | orchestrator | 2026-04-07 02:48:57.181372 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 02:48:57.181382 | orchestrator | Tuesday 07 April 2026 02:48:53 +0000 (0:00:00.638) 0:00:27.466 ********* 2026-04-07 
02:48:57.181392 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:57.181401 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:57.181411 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:57.181420 | orchestrator | 2026-04-07 02:48:57.181430 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-07 02:48:57.181440 | orchestrator | Tuesday 07 April 2026 02:48:53 +0000 (0:00:00.455) 0:00:27.922 ********* 2026-04-07 02:48:57.181449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:48:57.181466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:48:57.181483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:48:57.181493 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181503 | orchestrator | 2026-04-07 02:48:57.181513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 02:48:57.181523 | orchestrator | Tuesday 07 April 2026 02:48:53 +0000 (0:00:00.421) 0:00:28.344 ********* 2026-04-07 02:48:57.181533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:48:57.181542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:48:57.181552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:48:57.181561 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181571 | orchestrator | 2026-04-07 02:48:57.181580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 02:48:57.181590 | orchestrator | Tuesday 07 April 2026 02:48:54 +0000 (0:00:00.466) 0:00:28.810 ********* 2026-04-07 02:48:57.181600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 02:48:57.181644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 02:48:57.181658 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 02:48:57.181668 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:48:57.181677 | orchestrator | 2026-04-07 02:48:57.181687 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 02:48:57.181697 | orchestrator | Tuesday 07 April 2026 02:48:54 +0000 (0:00:00.433) 0:00:29.243 ********* 2026-04-07 02:48:57.181706 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:48:57.181716 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:48:57.181726 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:48:57.181735 | orchestrator | 2026-04-07 02:48:57.181745 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-07 02:48:57.181754 | orchestrator | Tuesday 07 April 2026 02:48:55 +0000 (0:00:00.394) 0:00:29.638 ********* 2026-04-07 02:48:57.181764 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 02:48:57.181774 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-07 02:48:57.181783 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-07 02:48:57.181792 | orchestrator | 2026-04-07 02:48:57.181802 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 02:48:57.181812 | orchestrator | Tuesday 07 April 2026 02:48:56 +0000 (0:00:00.968) 0:00:30.607 ********* 2026-04-07 02:48:57.181821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:48:57.181838 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:50:37.888810 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:50:37.888921 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 02:50:37.888936 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-07 02:50:37.888948 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 02:50:37.888958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 02:50:37.888968 | orchestrator | 2026-04-07 02:50:37.888979 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 02:50:37.888990 | orchestrator | Tuesday 07 April 2026 02:48:57 +0000 (0:00:00.917) 0:00:31.524 ********* 2026-04-07 02:50:37.888999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-07 02:50:37.889009 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 02:50:37.889019 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 02:50:37.889028 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-07 02:50:37.889060 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 02:50:37.889071 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 02:50:37.889080 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 02:50:37.889090 | orchestrator | 2026-04-07 02:50:37.889099 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-07 02:50:37.889109 | orchestrator | Tuesday 07 April 2026 02:48:59 +0000 (0:00:01.969) 0:00:33.493 ********* 2026-04-07 02:50:37.889118 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:50:37.889129 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:50:37.889139 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-07 02:50:37.889149 | orchestrator | 2026-04-07 02:50:37.889159 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-07 02:50:37.889168 | orchestrator | Tuesday 07 April 2026 02:48:59 +0000 (0:00:00.433) 0:00:33.927 ********* 2026-04-07 02:50:37.889180 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 02:50:37.889191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 02:50:37.889215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 02:50:37.889225 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 02:50:37.889235 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-07 02:50:37.889245 | orchestrator | 2026-04-07 02:50:37.889255 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-07 02:50:37.889265 | orchestrator | Tuesday 07 April 2026 02:49:43 +0000 (0:00:44.381) 0:01:18.308 ********* 2026-04-07 02:50:37.889274 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889293 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889312 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889332 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-07 02:50:37.889344 | orchestrator | 2026-04-07 02:50:37.889361 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-07 02:50:37.889377 | orchestrator | Tuesday 07 April 2026 02:50:07 +0000 (0:00:23.835) 0:01:42.143 ********* 2026-04-07 02:50:37.889412 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889440 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889456 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889472 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889489 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889506 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889522 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-07 02:50:37.889539 | orchestrator | 2026-04-07 02:50:37.889550 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-07 02:50:37.889562 | orchestrator | Tuesday 07 April 2026 02:50:20 +0000 (0:00:12.370) 0:01:54.513 ********* 2026-04-07 02:50:37.889572 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889583 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 02:50:37.889595 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889606 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889617 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 02:50:37.889659 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889684 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 02:50:37.889695 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889706 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889717 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 02:50:37.889728 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889740 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889751 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-07 02:50:37.889762 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-07 02:50:37.889781 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 02:50:37.889790 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 02:50:37.889800 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-07 02:50:37.889810 | orchestrator | 2026-04-07 02:50:37.889820 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:50:37.889836 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-07 02:50:37.889848 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-07 02:50:37.889858 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-07 02:50:37.889868 | orchestrator | 2026-04-07 02:50:37.889877 | orchestrator | 2026-04-07 02:50:37.889887 | orchestrator | 2026-04-07 02:50:37.889896 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:50:37.889906 | orchestrator | Tuesday 07 April 2026 02:50:37 +0000 (0:00:17.700) 0:02:12.214 ********* 2026-04-07 02:50:37.889915 | orchestrator | =============================================================================== 2026-04-07 02:50:37.889933 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.38s 2026-04-07 02:50:37.889944 | orchestrator | generate keys ---------------------------------------------------------- 23.84s 2026-04-07 02:50:37.889955 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.70s 
2026-04-07 02:50:37.889965 | orchestrator | get keys from monitors ------------------------------------------------- 12.37s 2026-04-07 02:50:37.889976 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.18s 2026-04-07 02:50:37.889987 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.97s 2026-04-07 02:50:37.889998 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.53s 2026-04-07 02:50:37.890008 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.22s 2026-04-07 02:50:37.890113 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 1.20s 2026-04-07 02:50:37.890135 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.17s 2026-04-07 02:50:37.890151 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.97s 2026-04-07 02:50:37.890166 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.97s 2026-04-07 02:50:37.890182 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.94s 2026-04-07 02:50:37.890213 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2026-04-07 02:50:38.334982 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.88s 2026-04-07 02:50:38.335060 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.79s 2026-04-07 02:50:38.335069 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.78s 2026-04-07 02:50:38.335076 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.76s 2026-04-07 02:50:38.335082 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.74s 2026-04-07 
02:50:38.335089 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.72s 2026-04-07 02:50:41.084352 | orchestrator | 2026-04-07 02:50:41 | INFO  | Task 0368cdd1-f884-4863-a1b6-01391e4a441b (copy-ceph-keys) was prepared for execution. 2026-04-07 02:50:41.084426 | orchestrator | 2026-04-07 02:50:41 | INFO  | It takes a moment until task 0368cdd1-f884-4863-a1b6-01391e4a441b (copy-ceph-keys) has been started and output is visible here. 2026-04-07 02:51:22.553159 | orchestrator | 2026-04-07 02:51:22.553270 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-07 02:51:22.553288 | orchestrator | 2026-04-07 02:51:22.553306 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-07 02:51:22.553324 | orchestrator | Tuesday 07 April 2026 02:50:45 +0000 (0:00:00.182) 0:00:00.182 ********* 2026-04-07 02:51:22.553340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-07 02:51:22.553358 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 02:51:22.553405 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553422 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-07 02:51:22.553440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-07 02:51:22.553459 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-04-07 02:51:22.553504 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-07 02:51:22.553522 | orchestrator | 2026-04-07 02:51:22.553537 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-07 02:51:22.553548 | orchestrator | Tuesday 07 April 2026 02:50:50 +0000 (0:00:04.973) 0:00:05.156 ********* 2026-04-07 02:51:22.553557 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-07 02:51:22.553582 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553592 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553602 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 02:51:22.553611 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-07 02:51:22.553631 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-07 02:51:22.553689 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-07 02:51:22.553699 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-07 02:51:22.553709 | orchestrator | 2026-04-07 02:51:22.553719 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-07 02:51:22.553728 | orchestrator | Tuesday 07 April 2026 02:50:55 +0000 (0:00:04.405) 0:00:09.561 ********* 2026-04-07 02:51:22.553739 
| orchestrator | changed: [testbed-manager -> localhost] 2026-04-07 02:51:22.553749 | orchestrator | 2026-04-07 02:51:22.553759 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-07 02:51:22.553769 | orchestrator | Tuesday 07 April 2026 02:50:56 +0000 (0:00:01.096) 0:00:10.658 ********* 2026-04-07 02:51:22.553779 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-07 02:51:22.553790 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553800 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553811 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 02:51:22.553828 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.553844 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-07 02:51:22.553861 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-07 02:51:22.553876 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-07 02:51:22.553891 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-07 02:51:22.553907 | orchestrator | 2026-04-07 02:51:22.553920 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-07 02:51:22.553937 | orchestrator | Tuesday 07 April 2026 02:51:11 +0000 (0:00:14.974) 0:00:25.632 ********* 2026-04-07 02:51:22.553953 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-07 02:51:22.553970 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-04-07 02:51:22.553988 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-07 02:51:22.554004 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-07 02:51:22.554120 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-07 02:51:22.554144 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-07 02:51:22.554154 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-07 02:51:22.554163 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-07 02:51:22.554173 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-07 02:51:22.554183 | orchestrator | 2026-04-07 02:51:22.554192 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-07 02:51:22.554202 | orchestrator | Tuesday 07 April 2026 02:51:14 +0000 (0:00:03.401) 0:00:29.034 ********* 2026-04-07 02:51:22.554213 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-07 02:51:22.554222 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.554232 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.554242 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 02:51:22.554251 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-07 02:51:22.554261 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-07 02:51:22.554271 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-04-07 02:51:22.554280 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-07 02:51:22.554290 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-07 02:51:22.554299 | orchestrator | 2026-04-07 02:51:22.554310 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:51:22.554326 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:51:22.554337 | orchestrator | 2026-04-07 02:51:22.554347 | orchestrator | 2026-04-07 02:51:22.554357 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:51:22.554367 | orchestrator | Tuesday 07 April 2026 02:51:22 +0000 (0:00:07.565) 0:00:36.599 ********* 2026-04-07 02:51:22.554376 | orchestrator | =============================================================================== 2026-04-07 02:51:22.554386 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.97s 2026-04-07 02:51:22.554396 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.57s 2026-04-07 02:51:22.554405 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.97s 2026-04-07 02:51:22.554415 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.41s 2026-04-07 02:51:22.554424 | orchestrator | Check if target directories exist --------------------------------------- 3.40s 2026-04-07 02:51:22.554434 | orchestrator | Create share directory -------------------------------------------------- 1.10s 2026-04-07 02:51:35.223626 | orchestrator | 2026-04-07 02:51:35 | INFO  | Task d3214ea8-9d75-42af-9c75-c204cbf14e46 (cephclient) was prepared for execution. 
2026-04-07 02:51:35.223747 | orchestrator | 2026-04-07 02:51:35 | INFO  | It takes a moment until task d3214ea8-9d75-42af-9c75-c204cbf14e46 (cephclient) has been started and output is visible here. 2026-04-07 02:52:39.156093 | orchestrator | 2026-04-07 02:52:39.156176 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-07 02:52:39.156186 | orchestrator | 2026-04-07 02:52:39.156193 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-07 02:52:39.156199 | orchestrator | Tuesday 07 April 2026 02:51:40 +0000 (0:00:00.288) 0:00:00.288 ********* 2026-04-07 02:52:39.156206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-07 02:52:39.156230 | orchestrator | 2026-04-07 02:52:39.156236 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-07 02:52:39.156242 | orchestrator | Tuesday 07 April 2026 02:51:40 +0000 (0:00:00.347) 0:00:00.635 ********* 2026-04-07 02:52:39.156248 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-07 02:52:39.156254 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-07 02:52:39.156261 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-07 02:52:39.156266 | orchestrator | 2026-04-07 02:52:39.156272 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-07 02:52:39.156278 | orchestrator | Tuesday 07 April 2026 02:51:41 +0000 (0:00:01.415) 0:00:02.051 ********* 2026-04-07 02:52:39.156284 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-07 02:52:39.156290 | orchestrator | 2026-04-07 02:52:39.156295 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-04-07 02:52:39.156301 | orchestrator | Tuesday 07 April 2026 02:51:43 +0000 (0:00:01.690) 0:00:03.742 ********* 2026-04-07 02:52:39.156307 | orchestrator | changed: [testbed-manager] 2026-04-07 02:52:39.156313 | orchestrator | 2026-04-07 02:52:39.156319 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-07 02:52:39.156324 | orchestrator | Tuesday 07 April 2026 02:51:44 +0000 (0:00:01.019) 0:00:04.761 ********* 2026-04-07 02:52:39.156330 | orchestrator | changed: [testbed-manager] 2026-04-07 02:52:39.156335 | orchestrator | 2026-04-07 02:52:39.156341 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-07 02:52:39.156347 | orchestrator | Tuesday 07 April 2026 02:51:45 +0000 (0:00:01.031) 0:00:05.793 ********* 2026-04-07 02:52:39.156352 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-07 02:52:39.156358 | orchestrator | ok: [testbed-manager] 2026-04-07 02:52:39.156364 | orchestrator | 2026-04-07 02:52:39.156369 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-07 02:52:39.156375 | orchestrator | Tuesday 07 April 2026 02:52:28 +0000 (0:00:42.778) 0:00:48.571 ********* 2026-04-07 02:52:39.156381 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-07 02:52:39.156387 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-07 02:52:39.156392 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-07 02:52:39.156398 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-07 02:52:39.156404 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-07 02:52:39.156410 | orchestrator | 2026-04-07 02:52:39.156416 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-07 02:52:39.156421 | 
orchestrator | Tuesday 07 April 2026 02:52:32 +0000 (0:00:04.410) 0:00:52.982 ********* 2026-04-07 02:52:39.156427 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-07 02:52:39.156433 | orchestrator | 2026-04-07 02:52:39.156438 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-07 02:52:39.156444 | orchestrator | Tuesday 07 April 2026 02:52:33 +0000 (0:00:00.502) 0:00:53.484 ********* 2026-04-07 02:52:39.156450 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:52:39.156455 | orchestrator | 2026-04-07 02:52:39.156461 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-07 02:52:39.156466 | orchestrator | Tuesday 07 April 2026 02:52:33 +0000 (0:00:00.153) 0:00:53.637 ********* 2026-04-07 02:52:39.156472 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:52:39.156478 | orchestrator | 2026-04-07 02:52:39.156483 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-07 02:52:39.156489 | orchestrator | Tuesday 07 April 2026 02:52:33 +0000 (0:00:00.607) 0:00:54.245 ********* 2026-04-07 02:52:39.156505 | orchestrator | changed: [testbed-manager] 2026-04-07 02:52:39.156511 | orchestrator | 2026-04-07 02:52:39.156517 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-07 02:52:39.156530 | orchestrator | Tuesday 07 April 2026 02:52:35 +0000 (0:00:01.564) 0:00:55.809 ********* 2026-04-07 02:52:39.156536 | orchestrator | changed: [testbed-manager] 2026-04-07 02:52:39.156541 | orchestrator | 2026-04-07 02:52:39.156547 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2026-04-07 02:52:39.156552 | orchestrator | Tuesday 07 April 2026 02:52:36 +0000 (0:00:00.666) 0:00:56.651 ********* 2026-04-07 02:52:39.156558 | orchestrator | changed: [testbed-manager] 2026-04-07 02:52:39.156564 | 
orchestrator | 2026-04-07 02:52:39.156569 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-07 02:52:39.156575 | orchestrator | Tuesday 07 April 2026 02:52:37 +0000 (0:00:00.666) 0:00:57.318 ********* 2026-04-07 02:52:39.156580 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-07 02:52:39.156586 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-07 02:52:39.156592 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-07 02:52:39.156598 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-07 02:52:39.156606 | orchestrator | 2026-04-07 02:52:39.156615 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:52:39.156624 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 02:52:39.156633 | orchestrator | 2026-04-07 02:52:39.156641 | orchestrator | 2026-04-07 02:52:39.156720 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:52:39.156731 | orchestrator | Tuesday 07 April 2026 02:52:38 +0000 (0:00:01.675) 0:00:58.994 ********* 2026-04-07 02:52:39.156740 | orchestrator | =============================================================================== 2026-04-07 02:52:39.156748 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.78s 2026-04-07 02:52:39.156758 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.41s 2026-04-07 02:52:39.156767 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.69s 2026-04-07 02:52:39.156776 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.68s 2026-04-07 02:52:39.156785 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.56s 2026-04-07 02:52:39.156794 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.42s 2026-04-07 02:52:39.156800 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.03s 2026-04-07 02:52:39.156806 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2026-04-07 02:52:39.156812 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2026-04-07 02:52:39.156818 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.67s 2026-04-07 02:52:39.156824 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.61s 2026-04-07 02:52:39.156830 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-04-07 02:52:39.156837 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.35s 2026-04-07 02:52:39.156843 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-04-07 02:52:41.830749 | orchestrator | 2026-04-07 02:52:41 | INFO  | Task 31c53c0a-f256-44b0-a728-bc4f4efdeb73 (ceph-bootstrap-dashboard) was prepared for execution. 2026-04-07 02:52:41.830875 | orchestrator | 2026-04-07 02:52:41 | INFO  | It takes a moment until task 31c53c0a-f256-44b0-a728-bc4f4efdeb73 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-04-07 02:54:04.819428 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-07 02:54:04.819571 | orchestrator | 2.16.14 2026-04-07 02:54:04.819598 | orchestrator | 2026-04-07 02:54:04.819620 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-04-07 02:54:04.819640 | orchestrator | 2026-04-07 02:54:04.819659 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-07 02:54:04.819741 | orchestrator | Tuesday 07 April 2026 02:52:46 +0000 (0:00:00.314) 0:00:00.314 ********* 2026-04-07 02:54:04.819763 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.819783 | orchestrator | 2026-04-07 02:54:04.819802 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-07 02:54:04.819820 | orchestrator | Tuesday 07 April 2026 02:52:48 +0000 (0:00:01.464) 0:00:01.778 ********* 2026-04-07 02:54:04.819838 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.819856 | orchestrator | 2026-04-07 02:54:04.819874 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-07 02:54:04.819892 | orchestrator | Tuesday 07 April 2026 02:52:49 +0000 (0:00:01.087) 0:00:02.865 ********* 2026-04-07 02:54:04.819910 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.819929 | orchestrator | 2026-04-07 02:54:04.819946 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-07 02:54:04.819964 | orchestrator | Tuesday 07 April 2026 02:52:50 +0000 (0:00:01.192) 0:00:04.058 ********* 2026-04-07 02:54:04.819982 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820000 | orchestrator | 2026-04-07 02:54:04.820017 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-07 02:54:04.820036 | orchestrator | Tuesday 07 April 
2026 02:52:51 +0000 (0:00:01.291) 0:00:05.349 ********* 2026-04-07 02:54:04.820055 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820075 | orchestrator | 2026-04-07 02:54:04.820094 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-07 02:54:04.820134 | orchestrator | Tuesday 07 April 2026 02:52:52 +0000 (0:00:01.213) 0:00:06.563 ********* 2026-04-07 02:54:04.820175 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820211 | orchestrator | 2026-04-07 02:54:04.820232 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-07 02:54:04.820252 | orchestrator | Tuesday 07 April 2026 02:52:54 +0000 (0:00:01.113) 0:00:07.676 ********* 2026-04-07 02:54:04.820271 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820291 | orchestrator | 2026-04-07 02:54:04.820307 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-07 02:54:04.820318 | orchestrator | Tuesday 07 April 2026 02:52:56 +0000 (0:00:02.036) 0:00:09.713 ********* 2026-04-07 02:54:04.820329 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820339 | orchestrator | 2026-04-07 02:54:04.820350 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-07 02:54:04.820361 | orchestrator | Tuesday 07 April 2026 02:52:57 +0000 (0:00:01.302) 0:00:11.015 ********* 2026-04-07 02:54:04.820372 | orchestrator | changed: [testbed-manager] 2026-04-07 02:54:04.820383 | orchestrator | 2026-04-07 02:54:04.820394 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-07 02:54:04.820405 | orchestrator | Tuesday 07 April 2026 02:53:39 +0000 (0:00:41.816) 0:00:52.831 ********* 2026-04-07 02:54:04.820415 | orchestrator | skipping: [testbed-manager] 2026-04-07 02:54:04.820426 | orchestrator | 2026-04-07 02:54:04.820437 | orchestrator 
| PLAY [Restart ceph manager services] ******************************************* 2026-04-07 02:54:04.820449 | orchestrator | 2026-04-07 02:54:04.820459 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-07 02:54:04.820470 | orchestrator | Tuesday 07 April 2026 02:53:39 +0000 (0:00:00.191) 0:00:53.023 ********* 2026-04-07 02:54:04.820481 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:54:04.820492 | orchestrator | 2026-04-07 02:54:04.820503 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-07 02:54:04.820514 | orchestrator | 2026-04-07 02:54:04.820525 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-07 02:54:04.820535 | orchestrator | Tuesday 07 April 2026 02:53:51 +0000 (0:00:12.085) 0:01:05.108 ********* 2026-04-07 02:54:04.820546 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:54:04.820557 | orchestrator | 2026-04-07 02:54:04.820568 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-07 02:54:04.820593 | orchestrator | 2026-04-07 02:54:04.820604 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-07 02:54:04.820615 | orchestrator | Tuesday 07 April 2026 02:54:02 +0000 (0:00:11.353) 0:01:16.462 ********* 2026-04-07 02:54:04.820627 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:54:04.820638 | orchestrator | 2026-04-07 02:54:04.820649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:54:04.820661 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 02:54:04.820674 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:54:04.820685 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:54:04.820798 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 02:54:04.820810 | orchestrator | 2026-04-07 02:54:04.820821 | orchestrator | 2026-04-07 02:54:04.820832 | orchestrator | 2026-04-07 02:54:04.820843 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:54:04.820867 | orchestrator | Tuesday 07 April 2026 02:54:04 +0000 (0:00:01.417) 0:01:17.880 ********* 2026-04-07 02:54:04.820878 | orchestrator | =============================================================================== 2026-04-07 02:54:04.820889 | orchestrator | Create admin user ------------------------------------------------------ 41.82s 2026-04-07 02:54:04.820938 | orchestrator | Restart ceph manager service ------------------------------------------- 24.86s 2026-04-07 02:54:04.820958 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2026-04-07 02:54:04.820976 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s 2026-04-07 02:54:04.820995 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.30s 2026-04-07 02:54:04.821015 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s 2026-04-07 02:54:04.821035 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.21s 2026-04-07 02:54:04.821047 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.19s 2026-04-07 02:54:04.821058 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-04-07 02:54:04.821069 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s 2026-04-07 02:54:04.821079 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.19s 2026-04-07 02:54:05.247864 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-04-07 02:54:07.450387 | orchestrator | 2026-04-07 02:54:07 | INFO  | Task 4961ace6-f006-4fc5-86c9-ca9e5e46bd23 (keystone) was prepared for execution. 2026-04-07 02:54:07.450492 | orchestrator | 2026-04-07 02:54:07 | INFO  | It takes a moment until task 4961ace6-f006-4fc5-86c9-ca9e5e46bd23 (keystone) has been started and output is visible here. 2026-04-07 02:54:15.457861 | orchestrator | 2026-04-07 02:54:15.457985 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:54:15.458006 | orchestrator | 2026-04-07 02:54:15.458087 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:54:15.458124 | orchestrator | Tuesday 07 April 2026 02:54:12 +0000 (0:00:00.310) 0:00:00.311 ********* 2026-04-07 02:54:15.458165 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:54:15.458181 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:54:15.458195 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:54:15.458208 | orchestrator | 2026-04-07 02:54:15.458222 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:54:15.458236 | orchestrator | Tuesday 07 April 2026 02:54:12 +0000 (0:00:00.344) 0:00:00.656 ********* 2026-04-07 02:54:15.458296 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-07 02:54:15.458313 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-07 02:54:15.458330 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-07 02:54:15.458345 | orchestrator | 2026-04-07 02:54:15.458359 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-07 02:54:15.458374 | orchestrator | 2026-04-07 02:54:15.458389 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-04-07 02:54:15.458403 | orchestrator | Tuesday 07 April 2026 02:54:13 +0000 (0:00:00.515) 0:00:01.171 ********* 2026-04-07 02:54:15.458419 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:54:15.458434 | orchestrator | 2026-04-07 02:54:15.458449 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-07 02:54:15.458464 | orchestrator | Tuesday 07 April 2026 02:54:13 +0000 (0:00:00.631) 0:00:01.803 ********* 2026-04-07 02:54:15.458488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:15.458509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:15.458563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:15.458597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:54:15.458750 | orchestrator | 2026-04-07 02:54:15.458768 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-04-07 02:54:15.458796 | orchestrator | Tuesday 07 April 2026 02:54:15 +0000 (0:00:01.709) 0:00:03.512 ********* 2026-04-07 02:54:21.573334 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:54:21.573432 | orchestrator | 2026-04-07 02:54:21.573444 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-07 02:54:21.573468 | orchestrator | Tuesday 07 April 2026 02:54:15 +0000 (0:00:00.336) 0:00:03.848 ********* 2026-04-07 02:54:21.573477 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:54:21.573485 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:54:21.573493 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:54:21.573501 | orchestrator | 2026-04-07 02:54:21.573509 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-07 02:54:21.573517 | orchestrator | Tuesday 07 April 2026 02:54:16 +0000 (0:00:00.338) 0:00:04.187 ********* 2026-04-07 02:54:21.573525 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 02:54:21.573533 | orchestrator | 2026-04-07 02:54:21.573541 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 02:54:21.573549 | orchestrator | Tuesday 07 April 2026 02:54:17 +0000 (0:00:00.984) 0:00:05.172 ********* 2026-04-07 02:54:21.573558 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:54:21.573566 | orchestrator | 2026-04-07 02:54:21.573574 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-07 02:54:21.573582 | orchestrator | Tuesday 07 April 2026 02:54:17 +0000 (0:00:00.604) 0:00:05.777 ********* 2026-04-07 02:54:21.573595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:21.573606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:21.573617 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:21.573661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:21.573673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:21.573682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:21.573736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:21.573745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:21.573760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:21.573768 | orchestrator |
2026-04-07 02:54:21.573777 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-07 02:54:21.573785 | orchestrator | Tuesday 07 April 2026 02:54:20 +0000 (0:00:03.248) 0:00:09.025 *********
2026-04-07 02:54:21.573801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:22.415348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:22.415474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:22.415492 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:22.415509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:22.415544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:22.415564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:22.415577 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:54:22.415609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:22.415623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:22.415634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:22.415654 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:54:22.415665 | orchestrator |
2026-04-07 02:54:22.415677 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-04-07 02:54:22.415690 | orchestrator | Tuesday 07 April 2026 02:54:21 +0000 (0:00:00.611) 0:00:09.636 *********
2026-04-07 02:54:22.415736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:22.415755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:22.415777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:25.720362 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:25.720458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:25.720474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:25.720504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:25.720514 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:54:25.720542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:25.720558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:25.720588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:25.720603 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:54:25.720617 | orchestrator |
2026-04-07 02:54:25.720631 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-04-07 02:54:25.720646 | orchestrator | Tuesday 07 April 2026 02:54:22 +0000 (0:00:00.839) 0:00:10.475 *********
2026-04-07 02:54:25.720661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:25.720686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:25.720787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:25.720808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:30.609983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:30.611023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:30.611062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:30.611067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:30.611081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:30.611085 | orchestrator |
2026-04-07 02:54:30.611091 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-04-07 02:54:30.611096 | orchestrator | Tuesday 07 April 2026 02:54:25 +0000 (0:00:03.305) 0:00:13.781 *********
2026-04-07 02:54:30.611116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:30.611121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:30.611130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:30.611135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:30.611142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:30.611150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:34.556147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:34.556263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:34.556273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:34.556281 | orchestrator |
2026-04-07 02:54:34.556289 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-04-07 02:54:34.556297 | orchestrator | Tuesday 07 April 2026 02:54:30 +0000 (0:00:04.889) 0:00:18.670 *********
2026-04-07 02:54:34.556304 | orchestrator | changed: [testbed-node-0]
2026-04-07 02:54:34.556312 | orchestrator | changed: [testbed-node-1]
2026-04-07 02:54:34.556318 | orchestrator | changed: [testbed-node-2]
2026-04-07 02:54:34.556325 | orchestrator |
2026-04-07 02:54:34.556331 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-07 02:54:34.556335 | orchestrator | Tuesday 07 April 2026 02:54:32 +0000 (0:00:01.534) 0:00:20.204 *********
2026-04-07 02:54:34.556339 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:34.556343 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:54:34.556347 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:54:34.556351 | orchestrator |
2026-04-07 02:54:34.556354 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-07 02:54:34.556358 | orchestrator | Tuesday 07 April 2026 02:54:32 +0000 (0:00:00.847) 0:00:21.051 *********
2026-04-07 02:54:34.556362 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:34.556366 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:54:34.556371 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:54:34.556378 | orchestrator |
2026-04-07 02:54:34.556396 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-07 02:54:34.556403 | orchestrator | Tuesday 07 April 2026 02:54:33 +0000 (0:00:00.595) 0:00:21.647 *********
2026-04-07 02:54:34.556410 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:34.556417 | orchestrator | skipping: [testbed-node-1]
2026-04-07 02:54:34.556423 | orchestrator | skipping: [testbed-node-2]
2026-04-07 02:54:34.556429 | orchestrator |
2026-04-07 02:54:34.556437 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-07 02:54:34.556442 | orchestrator | Tuesday 07 April 2026 02:54:33 +0000 (0:00:00.330) 0:00:21.978 *********
2026-04-07 02:54:34.556466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:34.556477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:34.556482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-07 02:54:34.556486 | orchestrator | skipping: [testbed-node-0]
2026-04-07 02:54:34.556491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-07 02:54:34.556498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-07 02:54:34.556502 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:54:34.556515 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:54:34.556528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-07 02:54:54.074447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 02:54:54.074586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 02:54:54.074604 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:54:54.074617 | orchestrator | 2026-04-07 02:54:54.074629 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 02:54:54.074641 | orchestrator | Tuesday 07 April 2026 02:54:34 +0000 (0:00:00.636) 0:00:22.615 ********* 2026-04-07 02:54:54.074651 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:54:54.074661 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:54:54.074670 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:54:54.074680 | orchestrator | 2026-04-07 02:54:54.074690 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-07 02:54:54.074744 | orchestrator | Tuesday 07 April 2026 02:54:34 +0000 (0:00:00.349) 0:00:22.964 ********* 2026-04-07 02:54:54.074757 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 02:54:54.074768 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 02:54:54.074802 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-07 02:54:54.074813 | orchestrator | 2026-04-07 02:54:54.074836 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-07 02:54:54.074847 | orchestrator | Tuesday 07 April 2026 02:54:36 +0000 (0:00:01.904) 0:00:24.868 ********* 2026-04-07 02:54:54.074856 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 02:54:54.074866 | orchestrator | 2026-04-07 02:54:54.074876 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-07 02:54:54.074886 | orchestrator | Tuesday 07 April 2026 02:54:37 +0000 (0:00:00.960) 0:00:25.829 ********* 2026-04-07 02:54:54.074895 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:54:54.074905 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:54:54.074915 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:54:54.074925 | orchestrator | 2026-04-07 02:54:54.074934 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-07 02:54:54.074944 | orchestrator | Tuesday 07 April 2026 02:54:38 +0000 (0:00:00.554) 0:00:26.383 ********* 2026-04-07 02:54:54.074954 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 02:54:54.074963 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 02:54:54.074973 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 02:54:54.074983 | orchestrator | 2026-04-07 02:54:54.074993 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-07 02:54:54.075004 | orchestrator | Tuesday 07 April 2026 02:54:39 +0000 (0:00:01.121) 
0:00:27.505 ********* 2026-04-07 02:54:54.075013 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:54:54.075024 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:54:54.075034 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:54:54.075044 | orchestrator | 2026-04-07 02:54:54.075053 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-07 02:54:54.075063 | orchestrator | Tuesday 07 April 2026 02:54:40 +0000 (0:00:00.684) 0:00:28.190 ********* 2026-04-07 02:54:54.075073 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 02:54:54.075083 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 02:54:54.075093 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-07 02:54:54.075106 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 02:54:54.075120 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 02:54:54.075136 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-07 02:54:54.075153 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 02:54:54.075170 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 02:54:54.075207 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-07 02:54:54.075225 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 02:54:54.075242 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 
02:54:54.075259 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-07 02:54:54.075278 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-07 02:54:54.075295 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-07 02:54:54.075314 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-07 02:54:54.075334 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 02:54:54.075363 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 02:54:54.075381 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 02:54:54.075398 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 02:54:54.075414 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 02:54:54.075433 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 02:54:54.075451 | orchestrator | 2026-04-07 02:54:54.075470 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-07 02:54:54.075487 | orchestrator | Tuesday 07 April 2026 02:54:49 +0000 (0:00:08.984) 0:00:37.175 ********* 2026-04-07 02:54:54.075503 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 02:54:54.075520 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 02:54:54.075537 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 02:54:54.075552 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 02:54:54.075570 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 02:54:54.075585 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 02:54:54.075600 | orchestrator | 2026-04-07 02:54:54.075610 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-07 02:54:54.075627 | orchestrator | Tuesday 07 April 2026 02:54:51 +0000 (0:00:02.682) 0:00:39.857 ********* 2026-04-07 02:54:54.075642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:54:54.075665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:56:27.456496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-07 02:56:27.456637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-07 02:56:27.456797 | orchestrator | 2026-04-07 02:56:27.456804 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-04-07 02:56:27.456811 | orchestrator | Tuesday 07 April 2026 02:54:54 +0000 (0:00:02.274) 0:00:42.132 ********* 2026-04-07 02:56:27.456819 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:56:27.456830 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:56:27.456838 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:56:27.456847 | orchestrator | 2026-04-07 02:56:27.456855 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-07 02:56:27.456863 | orchestrator | Tuesday 07 April 2026 02:54:54 +0000 (0:00:00.605) 0:00:42.737 ********* 2026-04-07 02:56:27.456872 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.456880 | orchestrator | 2026-04-07 02:56:27.456889 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-07 02:56:27.456897 | orchestrator | Tuesday 07 April 2026 02:54:57 +0000 (0:00:02.340) 0:00:45.078 ********* 2026-04-07 02:56:27.456905 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.456913 | orchestrator | 2026-04-07 02:56:27.456922 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-07 02:56:27.456930 | orchestrator | Tuesday 07 April 2026 02:54:59 +0000 (0:00:02.175) 0:00:47.254 ********* 2026-04-07 02:56:27.456939 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:56:27.456947 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:56:27.456955 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:56:27.456963 | orchestrator | 2026-04-07 02:56:27.456971 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-07 02:56:27.456980 | orchestrator | Tuesday 07 April 2026 02:55:00 +0000 (0:00:00.881) 0:00:48.136 ********* 2026-04-07 02:56:27.456987 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:56:27.456995 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:56:27.457003 | orchestrator | ok: 
[testbed-node-2] 2026-04-07 02:56:27.457010 | orchestrator | 2026-04-07 02:56:27.457018 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-07 02:56:27.457037 | orchestrator | Tuesday 07 April 2026 02:55:00 +0000 (0:00:00.440) 0:00:48.576 ********* 2026-04-07 02:56:27.457046 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:56:27.457055 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:56:27.457063 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:56:27.457071 | orchestrator | 2026-04-07 02:56:27.457080 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-07 02:56:27.457089 | orchestrator | Tuesday 07 April 2026 02:55:01 +0000 (0:00:00.652) 0:00:49.228 ********* 2026-04-07 02:56:27.457098 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.457106 | orchestrator | 2026-04-07 02:56:27.457116 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-07 02:56:27.457124 | orchestrator | Tuesday 07 April 2026 02:55:16 +0000 (0:00:15.049) 0:01:04.277 ********* 2026-04-07 02:56:27.457132 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.457140 | orchestrator | 2026-04-07 02:56:27.457149 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-07 02:56:27.457157 | orchestrator | Tuesday 07 April 2026 02:55:27 +0000 (0:00:10.799) 0:01:15.077 ********* 2026-04-07 02:56:27.457175 | orchestrator | 2026-04-07 02:56:27.457183 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-07 02:56:27.457192 | orchestrator | Tuesday 07 April 2026 02:55:27 +0000 (0:00:00.068) 0:01:15.145 ********* 2026-04-07 02:56:27.457201 | orchestrator | 2026-04-07 02:56:27.457209 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-07 
02:56:27.457218 | orchestrator | Tuesday 07 April 2026 02:55:27 +0000 (0:00:00.074) 0:01:15.220 ********* 2026-04-07 02:56:27.457226 | orchestrator | 2026-04-07 02:56:27.457234 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-07 02:56:27.457243 | orchestrator | Tuesday 07 April 2026 02:55:27 +0000 (0:00:00.076) 0:01:15.296 ********* 2026-04-07 02:56:27.457251 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.457259 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:56:27.457268 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:56:27.457276 | orchestrator | 2026-04-07 02:56:27.457285 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-07 02:56:27.457294 | orchestrator | Tuesday 07 April 2026 02:56:14 +0000 (0:00:47.033) 0:02:02.330 ********* 2026-04-07 02:56:27.457302 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.457311 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:56:27.457320 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:56:27.457328 | orchestrator | 2026-04-07 02:56:27.457336 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-07 02:56:27.457345 | orchestrator | Tuesday 07 April 2026 02:56:19 +0000 (0:00:05.528) 0:02:07.858 ********* 2026-04-07 02:56:27.457354 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:56:27.457362 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:56:27.457371 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:56:27.457380 | orchestrator | 2026-04-07 02:56:27.457388 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 02:56:27.457396 | orchestrator | Tuesday 07 April 2026 02:56:26 +0000 (0:00:06.969) 0:02:14.828 ********* 2026-04-07 02:56:27.457417 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:57:20.479027 | orchestrator | 2026-04-07 02:57:20.479109 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-07 02:57:20.479116 | orchestrator | Tuesday 07 April 2026 02:56:27 +0000 (0:00:00.690) 0:02:15.518 ********* 2026-04-07 02:57:20.479121 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:57:20.479127 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:57:20.479131 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:57:20.479135 | orchestrator | 2026-04-07 02:57:20.479139 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-07 02:57:20.479144 | orchestrator | Tuesday 07 April 2026 02:56:28 +0000 (0:00:01.290) 0:02:16.809 ********* 2026-04-07 02:57:20.479148 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:57:20.479152 | orchestrator | 2026-04-07 02:57:20.479156 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-07 02:57:20.479160 | orchestrator | Tuesday 07 April 2026 02:56:30 +0000 (0:00:01.881) 0:02:18.690 ********* 2026-04-07 02:57:20.479164 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-07 02:57:20.479168 | orchestrator | 2026-04-07 02:57:20.479172 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-07 02:57:20.479176 | orchestrator | Tuesday 07 April 2026 02:56:42 +0000 (0:00:12.181) 0:02:30.872 ********* 2026-04-07 02:57:20.479179 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-07 02:57:20.479183 | orchestrator | 2026-04-07 02:57:20.479187 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-07 02:57:20.479191 | orchestrator | Tuesday 07 April 2026 02:57:07 +0000 (0:00:25.082) 0:02:55.954 ********* 2026-04-07 02:57:20.479195 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-07 02:57:20.479214 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-07 02:57:20.479218 | orchestrator | 2026-04-07 02:57:20.479222 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-07 02:57:20.479225 | orchestrator | Tuesday 07 April 2026 02:57:14 +0000 (0:00:07.042) 0:03:02.997 ********* 2026-04-07 02:57:20.479229 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:57:20.479233 | orchestrator | 2026-04-07 02:57:20.479237 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-07 02:57:20.479240 | orchestrator | Tuesday 07 April 2026 02:57:15 +0000 (0:00:00.153) 0:03:03.150 ********* 2026-04-07 02:57:20.479244 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:57:20.479248 | orchestrator | 2026-04-07 02:57:20.479252 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-07 02:57:20.479255 | orchestrator | Tuesday 07 April 2026 02:57:15 +0000 (0:00:00.146) 0:03:03.297 ********* 2026-04-07 02:57:20.479259 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:57:20.479263 | orchestrator | 2026-04-07 02:57:20.479277 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-07 02:57:20.479281 | orchestrator | Tuesday 07 April 2026 02:57:15 +0000 (0:00:00.147) 0:03:03.444 ********* 2026-04-07 02:57:20.479284 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:57:20.479288 | orchestrator | 2026-04-07 02:57:20.479292 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-07 02:57:20.479296 | orchestrator | Tuesday 07 April 2026 02:57:16 +0000 (0:00:00.719) 0:03:04.164 ********* 2026-04-07 02:57:20.479299 | orchestrator | ok: [testbed-node-0] 2026-04-07 
02:57:20.479303 | orchestrator | 2026-04-07 02:57:20.479307 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-07 02:57:20.479310 | orchestrator | Tuesday 07 April 2026 02:57:19 +0000 (0:00:03.309) 0:03:07.473 ********* 2026-04-07 02:57:20.479314 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:57:20.479318 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:57:20.479321 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:57:20.479325 | orchestrator | 2026-04-07 02:57:20.479329 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:57:20.479334 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 02:57:20.479340 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 02:57:20.479343 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 02:57:20.479347 | orchestrator | 2026-04-07 02:57:20.479351 | orchestrator | 2026-04-07 02:57:20.479355 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:57:20.479359 | orchestrator | Tuesday 07 April 2026 02:57:19 +0000 (0:00:00.505) 0:03:07.979 ********* 2026-04-07 02:57:20.479363 | orchestrator | =============================================================================== 2026-04-07 02:57:20.479366 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.03s 2026-04-07 02:57:20.479370 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.08s 2026-04-07 02:57:20.479374 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.05s 2026-04-07 02:57:20.479378 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.18s 2026-04-07 02:57:20.479383 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.80s 2026-04-07 02:57:20.479389 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.98s 2026-04-07 02:57:20.479396 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.04s 2026-04-07 02:57:20.479403 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.97s 2026-04-07 02:57:20.479415 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.53s 2026-04-07 02:57:20.479433 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.89s 2026-04-07 02:57:20.479440 | orchestrator | keystone : Creating default user role ----------------------------------- 3.31s 2026-04-07 02:57:20.479445 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.31s 2026-04-07 02:57:20.479452 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.25s 2026-04-07 02:57:20.479458 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.68s 2026-04-07 02:57:20.479464 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.34s 2026-04-07 02:57:20.479470 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.27s 2026-04-07 02:57:20.479477 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.18s 2026-04-07 02:57:20.479484 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.90s 2026-04-07 02:57:20.479501 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.88s 2026-04-07 02:57:20.479507 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 
1.71s 2026-04-07 02:57:23.488269 | orchestrator | 2026-04-07 02:57:23 | INFO  | Task 074293a6-9f10-47a2-88bb-93a6c81239bb (placement) was prepared for execution. 2026-04-07 02:57:23.488390 | orchestrator | 2026-04-07 02:57:23 | INFO  | It takes a moment until task 074293a6-9f10-47a2-88bb-93a6c81239bb (placement) has been started and output is visible here. 2026-04-07 02:58:00.718253 | orchestrator | 2026-04-07 02:58:00.718393 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:58:00.718403 | orchestrator | 2026-04-07 02:58:00.718409 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:58:00.718414 | orchestrator | Tuesday 07 April 2026 02:57:28 +0000 (0:00:00.301) 0:00:00.301 ********* 2026-04-07 02:58:00.718419 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:58:00.718586 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:58:00.718592 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:58:00.718599 | orchestrator | 2026-04-07 02:58:00.718605 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:58:00.718611 | orchestrator | Tuesday 07 April 2026 02:57:28 +0000 (0:00:00.342) 0:00:00.644 ********* 2026-04-07 02:58:00.718618 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-07 02:58:00.718625 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-07 02:58:00.718632 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-07 02:58:00.718639 | orchestrator | 2026-04-07 02:58:00.718662 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-07 02:58:00.718669 | orchestrator | 2026-04-07 02:58:00.718675 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 02:58:00.718681 | orchestrator | Tuesday 07 April 2026 02:57:29 
+0000 (0:00:00.489) 0:00:01.133 ********* 2026-04-07 02:58:00.718689 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:58:00.718697 | orchestrator | 2026-04-07 02:58:00.718703 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-07 02:58:00.718708 | orchestrator | Tuesday 07 April 2026 02:57:29 +0000 (0:00:00.639) 0:00:01.773 ********* 2026-04-07 02:58:00.718715 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-07 02:58:00.718721 | orchestrator | 2026-04-07 02:58:00.718727 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-07 02:58:00.718734 | orchestrator | Tuesday 07 April 2026 02:57:33 +0000 (0:00:04.080) 0:00:05.854 ********* 2026-04-07 02:58:00.718742 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-07 02:58:00.718801 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-07 02:58:00.718807 | orchestrator | 2026-04-07 02:58:00.718811 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-07 02:58:00.718815 | orchestrator | Tuesday 07 April 2026 02:57:40 +0000 (0:00:06.610) 0:00:12.464 ********* 2026-04-07 02:58:00.718819 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-07 02:58:00.718823 | orchestrator | 2026-04-07 02:58:00.718827 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-07 02:58:00.718831 | orchestrator | Tuesday 07 April 2026 02:57:44 +0000 (0:00:03.930) 0:00:16.395 ********* 2026-04-07 02:58:00.718835 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 02:58:00.718839 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-04-07 02:58:00.718843 | orchestrator | 2026-04-07 02:58:00.718847 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-07 02:58:00.718851 | orchestrator | Tuesday 07 April 2026 02:57:48 +0000 (0:00:04.306) 0:00:20.701 ********* 2026-04-07 02:58:00.718855 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 02:58:00.718859 | orchestrator | 2026-04-07 02:58:00.718863 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-07 02:58:00.718866 | orchestrator | Tuesday 07 April 2026 02:57:51 +0000 (0:00:03.251) 0:00:23.953 ********* 2026-04-07 02:58:00.718870 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-07 02:58:00.718874 | orchestrator | 2026-04-07 02:58:00.718878 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 02:58:00.718882 | orchestrator | Tuesday 07 April 2026 02:57:56 +0000 (0:00:04.184) 0:00:28.137 ********* 2026-04-07 02:58:00.718886 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:00.718890 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:58:00.718894 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:58:00.718898 | orchestrator | 2026-04-07 02:58:00.718902 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-07 02:58:00.718906 | orchestrator | Tuesday 07 April 2026 02:57:56 +0000 (0:00:00.329) 0:00:28.467 ********* 2026-04-07 02:58:00.718916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:00.718946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:00.718961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:00.718968 | orchestrator | 2026-04-07 02:58:00.718974 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-07 02:58:00.718981 | orchestrator | Tuesday 07 April 2026 02:57:57 +0000 (0:00:01.120) 0:00:29.588 ********* 2026-04-07 02:58:00.718988 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:00.718994 | orchestrator | 2026-04-07 02:58:00.719001 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-07 02:58:00.719005 | orchestrator | Tuesday 07 April 2026 02:57:57 +0000 (0:00:00.378) 0:00:29.966 ********* 2026-04-07 02:58:00.719009 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:00.719013 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:58:00.719017 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:58:00.719020 | orchestrator | 2026-04-07 02:58:00.719024 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-07 02:58:00.719028 | orchestrator | Tuesday 07 April 2026 02:57:58 +0000 (0:00:00.332) 0:00:30.299 ********* 2026-04-07 02:58:00.719032 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 02:58:00.719036 | orchestrator | 2026-04-07 02:58:00.719039 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-07 02:58:00.719043 | orchestrator | Tuesday 07 April 2026 02:57:58 +0000 (0:00:00.626) 0:00:30.925 ********* 2026-04-07 
02:58:00.719047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:00.719057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:03.755351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:03.755464 | orchestrator | 2026-04-07 02:58:03.755483 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-07 02:58:03.755497 | orchestrator | Tuesday 07 April 2026 02:58:00 +0000 (0:00:01.763) 0:00:32.689 ********* 2026-04-07 02:58:03.755512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755524 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:03.755537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755548 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:58:03.755560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755595 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:58:03.755607 | orchestrator | 2026-04-07 02:58:03.755618 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-07 02:58:03.755648 | orchestrator | Tuesday 07 April 2026 02:58:01 +0000 (0:00:00.580) 0:00:33.269 ********* 2026-04-07 02:58:03.755669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755681 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:03.755692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755703 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:58:03.755715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:03.755726 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:58:03.755736 | orchestrator | 2026-04-07 02:58:03.755748 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-07 02:58:03.755786 | orchestrator | Tuesday 07 April 2026 02:58:02 +0000 (0:00:00.769) 0:00:34.039 ********* 2026-04-07 02:58:03.755799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:03.755839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:11.239601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:11.239727 | orchestrator | 2026-04-07 02:58:11.239744 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-07 02:58:11.239786 | orchestrator | Tuesday 07 April 2026 02:58:03 +0000 (0:00:01.688) 0:00:35.728 ********* 2026-04-07 02:58:11.239800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:11.239813 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:11.239864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:11.239877 | orchestrator | 2026-04-07 02:58:11.239888 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-04-07 02:58:11.239899 | orchestrator | Tuesday 07 April 2026 02:58:06 +0000 (0:00:02.483) 0:00:38.211 ********* 2026-04-07 02:58:11.239929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 02:58:11.239942 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 02:58:11.239953 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-07 02:58:11.239964 | orchestrator | 2026-04-07 02:58:11.239976 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-07 02:58:11.239986 | orchestrator | Tuesday 07 April 2026 02:58:07 +0000 (0:00:01.470) 0:00:39.681 ********* 2026-04-07 02:58:11.239997 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:58:11.240010 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:58:11.240021 | orchestrator | changed: [testbed-node-2] 2026-04-07 02:58:11.240032 | orchestrator | 2026-04-07 02:58:11.240043 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-07 02:58:11.240054 | orchestrator | Tuesday 07 April 2026 02:58:09 +0000 (0:00:01.511) 0:00:41.193 ********* 2026-04-07 02:58:11.240065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:11.240077 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:58:11.240089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:11.240110 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:58:11.240124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-07 02:58:11.240138 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:58:11.240151 | orchestrator | 2026-04-07 02:58:11.240164 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-07 02:58:11.240183 | orchestrator | Tuesday 07 April 2026 02:58:10 +0000 (0:00:00.872) 0:00:42.065 ********* 2026-04-07 02:58:11.240206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:35.651945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:35.652103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-07 02:58:35.652131 | orchestrator | 2026-04-07 02:58:35.652150 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-07 02:58:35.652168 | orchestrator | Tuesday 07 April 2026 02:58:11 +0000 (0:00:01.153) 0:00:43.218 ********* 2026-04-07 02:58:35.652186 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:58:35.652203 | orchestrator | 2026-04-07 02:58:35.652219 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-07 02:58:35.652234 | orchestrator | Tuesday 07 April 2026 02:58:13 +0000 (0:00:02.154) 0:00:45.372 ********* 2026-04-07 02:58:35.652284 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:58:35.652303 | orchestrator | 2026-04-07 02:58:35.652321 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-07 02:58:35.652339 | orchestrator | Tuesday 07 April 2026 02:58:15 +0000 (0:00:02.318) 0:00:47.690 ********* 2026-04-07 02:58:35.652354 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:58:35.652364 | orchestrator | 2026-04-07 02:58:35.652375 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-07 02:58:35.652386 | orchestrator | Tuesday 07 April 2026 02:58:29 +0000 (0:00:14.209) 0:01:01.899 ********* 2026-04-07 02:58:35.652397 | orchestrator | 2026-04-07 02:58:35.652408 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-07 02:58:35.652418 | orchestrator | Tuesday 07 April 2026 02:58:29 +0000 (0:00:00.070) 0:01:01.970 ********* 2026-04-07 02:58:35.652429 | orchestrator | 2026-04-07 02:58:35.652440 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-07 02:58:35.652451 | orchestrator | Tuesday 07 April 2026 02:58:30 +0000 (0:00:00.081) 0:01:02.051 ********* 2026-04-07 02:58:35.652461 | orchestrator | 2026-04-07 02:58:35.652472 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-07 02:58:35.652484 | orchestrator | Tuesday 07 April 2026 02:58:30 +0000 (0:00:00.094) 0:01:02.146 ********* 2026-04-07 02:58:35.652498 | orchestrator | changed: [testbed-node-0] 2026-04-07 02:58:35.652528 | orchestrator | changed: [testbed-node-1] 2026-04-07 02:58:35.652541 | orchestrator | changed: [testbed-node-2] 2026-04-07 
02:58:35.652554 | orchestrator | 2026-04-07 02:58:35.652566 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 02:58:35.652581 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 02:58:35.652607 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:58:35.652620 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 02:58:35.652632 | orchestrator | 2026-04-07 02:58:35.652645 | orchestrator | 2026-04-07 02:58:35.652658 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 02:58:35.652670 | orchestrator | Tuesday 07 April 2026 02:58:35 +0000 (0:00:05.013) 0:01:07.159 ********* 2026-04-07 02:58:35.652696 | orchestrator | =============================================================================== 2026-04-07 02:58:35.652708 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.21s 2026-04-07 02:58:35.652742 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.61s 2026-04-07 02:58:35.652755 | orchestrator | placement : Restart placement-api container ----------------------------- 5.01s 2026-04-07 02:58:35.652827 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.31s 2026-04-07 02:58:35.652841 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.18s 2026-04-07 02:58:35.652854 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.08s 2026-04-07 02:58:35.652866 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.93s 2026-04-07 02:58:35.652879 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.25s 2026-04-07 02:58:35.652898 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.48s 2026-04-07 02:58:35.652915 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.32s 2026-04-07 02:58:35.652935 | orchestrator | placement : Creating placement databases -------------------------------- 2.15s 2026-04-07 02:58:35.652954 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.76s 2026-04-07 02:58:35.652973 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-04-07 02:58:35.652985 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2026-04-07 02:58:35.652996 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.47s 2026-04-07 02:58:35.653006 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s 2026-04-07 02:58:35.653017 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.12s 2026-04-07 02:58:35.653028 | orchestrator | placement : Copying over existing policy file --------------------------- 0.87s 2026-04-07 02:58:35.653039 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.77s 2026-04-07 02:58:35.653049 | orchestrator | placement : include_tasks ----------------------------------------------- 0.64s 2026-04-07 02:58:38.274164 | orchestrator | 2026-04-07 02:58:38 | INFO  | Task 75a56f5e-6809-45be-ae58-e0b9f1fc8457 (neutron) was prepared for execution. 2026-04-07 02:58:38.274240 | orchestrator | 2026-04-07 02:58:38 | INFO  | It takes a moment until task 75a56f5e-6809-45be-ae58-e0b9f1fc8457 (neutron) has been started and output is visible here. 
2026-04-07 02:59:29.894493 | orchestrator | 2026-04-07 02:59:29.894609 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 02:59:29.894626 | orchestrator | 2026-04-07 02:59:29.894638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 02:59:29.894650 | orchestrator | Tuesday 07 April 2026 02:58:43 +0000 (0:00:00.293) 0:00:00.293 ********* 2026-04-07 02:59:29.894663 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:59:29.894683 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:59:29.894701 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:59:29.894724 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:59:29.894751 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:59:29.894769 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:59:29.894818 | orchestrator | 2026-04-07 02:59:29.894836 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 02:59:29.894853 | orchestrator | Tuesday 07 April 2026 02:58:43 +0000 (0:00:00.784) 0:00:01.077 ********* 2026-04-07 02:59:29.894872 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-07 02:59:29.894891 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-07 02:59:29.894909 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-07 02:59:29.894927 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-07 02:59:29.894945 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-07 02:59:29.894995 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-07 02:59:29.895011 | orchestrator | 2026-04-07 02:59:29.895022 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-07 02:59:29.895035 | orchestrator | 2026-04-07 02:59:29.895047 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-07 02:59:29.895060 | orchestrator | Tuesday 07 April 2026 02:58:44 +0000 (0:00:00.695) 0:00:01.773 ********* 2026-04-07 02:59:29.895089 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:59:29.895104 | orchestrator | 2026-04-07 02:59:29.895117 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-07 02:59:29.895130 | orchestrator | Tuesday 07 April 2026 02:58:45 +0000 (0:00:01.341) 0:00:03.115 ********* 2026-04-07 02:59:29.895142 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:59:29.895156 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:59:29.895168 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:59:29.895180 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:59:29.895193 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:59:29.895206 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:59:29.895219 | orchestrator | 2026-04-07 02:59:29.895232 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-07 02:59:29.895244 | orchestrator | Tuesday 07 April 2026 02:58:47 +0000 (0:00:01.413) 0:00:04.528 ********* 2026-04-07 02:59:29.895258 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:59:29.895270 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:59:29.895285 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:59:29.895297 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:59:29.895308 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:59:29.895321 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:59:29.895333 | orchestrator | 2026-04-07 02:59:29.895345 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-07 02:59:29.895357 | orchestrator | Tuesday 07 April 2026 02:58:48 +0000 (0:00:01.096) 0:00:05.624 ********* 
2026-04-07 02:59:29.895370 | orchestrator | ok: [testbed-node-0] => { 2026-04-07 02:59:29.895384 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895396 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895407 | orchestrator | } 2026-04-07 02:59:29.895418 | orchestrator | ok: [testbed-node-1] => { 2026-04-07 02:59:29.895428 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895439 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895450 | orchestrator | } 2026-04-07 02:59:29.895460 | orchestrator | ok: [testbed-node-2] => { 2026-04-07 02:59:29.895471 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895482 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895493 | orchestrator | } 2026-04-07 02:59:29.895503 | orchestrator | ok: [testbed-node-3] => { 2026-04-07 02:59:29.895514 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895525 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895535 | orchestrator | } 2026-04-07 02:59:29.895546 | orchestrator | ok: [testbed-node-4] => { 2026-04-07 02:59:29.895557 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895568 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895579 | orchestrator | } 2026-04-07 02:59:29.895590 | orchestrator | ok: [testbed-node-5] => { 2026-04-07 02:59:29.895601 | orchestrator |  "changed": false, 2026-04-07 02:59:29.895612 | orchestrator |  "msg": "All assertions passed" 2026-04-07 02:59:29.895623 | orchestrator | } 2026-04-07 02:59:29.895634 | orchestrator | 2026-04-07 02:59:29.895645 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-07 02:59:29.895655 | orchestrator | Tuesday 07 April 2026 02:58:49 +0000 (0:00:00.905) 0:00:06.530 ********* 2026-04-07 02:59:29.895666 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:29.895677 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:29.895688 | orchestrator 
| skipping: [testbed-node-2] 2026-04-07 02:59:29.895708 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:29.895719 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:29.895730 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:29.895740 | orchestrator | 2026-04-07 02:59:29.895751 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-07 02:59:29.895762 | orchestrator | Tuesday 07 April 2026 02:58:49 +0000 (0:00:00.673) 0:00:07.204 ********* 2026-04-07 02:59:29.895773 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-07 02:59:29.895907 | orchestrator | 2026-04-07 02:59:29.895927 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-07 02:59:29.895945 | orchestrator | Tuesday 07 April 2026 02:58:54 +0000 (0:00:04.072) 0:00:11.276 ********* 2026-04-07 02:59:29.895963 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-07 02:59:29.895983 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-07 02:59:29.896000 | orchestrator | 2026-04-07 02:59:29.896059 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-07 02:59:29.896095 | orchestrator | Tuesday 07 April 2026 02:59:00 +0000 (0:00:06.904) 0:00:18.180 ********* 2026-04-07 02:59:29.896116 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 02:59:29.896132 | orchestrator | 2026-04-07 02:59:29.896145 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-07 02:59:29.896165 | orchestrator | Tuesday 07 April 2026 02:59:04 +0000 (0:00:03.484) 0:00:21.665 ********* 2026-04-07 02:59:29.896182 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 02:59:29.896200 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-04-07 02:59:29.896220 | orchestrator | 2026-04-07 02:59:29.896238 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-07 02:59:29.896257 | orchestrator | Tuesday 07 April 2026 02:59:08 +0000 (0:00:04.000) 0:00:25.665 ********* 2026-04-07 02:59:29.896277 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 02:59:29.896297 | orchestrator | 2026-04-07 02:59:29.896316 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-07 02:59:29.896328 | orchestrator | Tuesday 07 April 2026 02:59:11 +0000 (0:00:03.477) 0:00:29.143 ********* 2026-04-07 02:59:29.896339 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-07 02:59:29.896349 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-07 02:59:29.896360 | orchestrator | 2026-04-07 02:59:29.896372 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 02:59:29.896390 | orchestrator | Tuesday 07 April 2026 02:59:20 +0000 (0:00:08.151) 0:00:37.295 ********* 2026-04-07 02:59:29.896406 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:29.896423 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:29.896440 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:29.896458 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:29.896476 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:29.896504 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:29.896522 | orchestrator | 2026-04-07 02:59:29.896541 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-07 02:59:29.896559 | orchestrator | Tuesday 07 April 2026 02:59:20 +0000 (0:00:00.946) 0:00:38.241 ********* 2026-04-07 02:59:29.896577 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
02:59:29.896597 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:29.896617 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:29.896636 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:29.896656 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:29.896675 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:29.896693 | orchestrator | 2026-04-07 02:59:29.896711 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-07 02:59:29.896730 | orchestrator | Tuesday 07 April 2026 02:59:23 +0000 (0:00:02.555) 0:00:40.797 ********* 2026-04-07 02:59:29.896765 | orchestrator | ok: [testbed-node-0] 2026-04-07 02:59:29.896811 | orchestrator | ok: [testbed-node-1] 2026-04-07 02:59:29.896829 | orchestrator | ok: [testbed-node-2] 2026-04-07 02:59:29.896846 | orchestrator | ok: [testbed-node-3] 2026-04-07 02:59:29.896865 | orchestrator | ok: [testbed-node-4] 2026-04-07 02:59:29.896882 | orchestrator | ok: [testbed-node-5] 2026-04-07 02:59:29.896901 | orchestrator | 2026-04-07 02:59:29.896919 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-07 02:59:29.896938 | orchestrator | Tuesday 07 April 2026 02:59:24 +0000 (0:00:01.365) 0:00:42.163 ********* 2026-04-07 02:59:29.896957 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:29.896976 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:29.896995 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:29.897013 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:29.897031 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:29.897047 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:29.897058 | orchestrator | 2026-04-07 02:59:29.897069 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-07 02:59:29.897082 | orchestrator | Tuesday 07 April 2026 02:59:27 +0000 (0:00:02.336) 
0:00:44.500 ********* 2026-04-07 02:59:29.897110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:29.897165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:35.930523 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:35.930626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:35.930637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:35.930643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:35.930650 | orchestrator | 2026-04-07 02:59:35.930657 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-07 02:59:35.930665 | orchestrator | Tuesday 07 April 2026 02:59:29 +0000 (0:00:02.649) 0:00:47.149 ********* 2026-04-07 02:59:35.930673 | orchestrator | [WARNING]: Skipped 2026-04-07 02:59:35.930678 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-07 02:59:35.930683 | orchestrator | due to this access issue: 2026-04-07 02:59:35.930688 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-07 02:59:35.930693 | orchestrator | a directory 2026-04-07 02:59:35.930697 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 02:59:35.930701 | orchestrator | 2026-04-07 02:59:35.930705 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-07 02:59:35.930709 | orchestrator | Tuesday 07 April 2026 02:59:30 +0000 (0:00:00.895) 0:00:48.044 ********* 2026-04-07 02:59:35.930714 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 02:59:35.930719 | orchestrator | 2026-04-07 02:59:35.930723 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-07 02:59:35.930736 | orchestrator | Tuesday 07 April 2026 02:59:32 +0000 (0:00:01.507) 0:00:49.551 ********* 2026-04-07 02:59:35.930744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:35.930753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:35.930757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:35.930761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:35.930769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:41.195897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:41.195989 | orchestrator | 2026-04-07 02:59:41.196000 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-07 02:59:41.196010 | orchestrator | Tuesday 07 April 2026 02:59:35 +0000 (0:00:03.627) 0:00:53.179 ********* 2026-04-07 02:59:41.196019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:41.196027 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:41.196036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:41.196042 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:41.196049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:41.196055 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:41.196094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:41.196102 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:41.196114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:41.196119 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:41.196123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 
02:59:41.196126 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:41.196130 | orchestrator | 2026-04-07 02:59:41.196134 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-07 02:59:41.196138 | orchestrator | Tuesday 07 April 2026 02:59:38 +0000 (0:00:02.113) 0:00:55.292 ********* 2026-04-07 02:59:41.196142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:41.196146 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:41.196152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:47.269384 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:47.269475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:47.269485 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:47.269492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:47.269497 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:47.269502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:47.269507 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:47.269512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:47.269534 | orchestrator | skipping: [testbed-node-4] 
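The service definitions logged above each carry a healthcheck dict with string-valued fields ('interval', 'retries', 'start_period', 'test', 'timeout'). As an illustration of how such a dict maps onto a container runtime's healthcheck configuration, here is a minimal sketch; the helper name `to_docker_healthcheck` is invented for this example (it is not part of kolla-ansible), and the assumption is that the target is the Docker Engine API's HealthConfig object, which expects durations in nanoseconds.

```python
# Illustrative helper (hypothetical, not from kolla-ansible): render a
# kolla-style healthcheck dict, as seen in the log above, into the shape
# of the Docker Engine API's HealthConfig (durations in nanoseconds).

def to_docker_healthcheck(hc):
    """Convert a kolla-style healthcheck dict (seconds as strings)
    into a Docker-API style dict (durations in nanoseconds)."""
    second_ns = 1_000_000_000
    return {
        "Test": list(hc["test"]),
        "Interval": int(hc["interval"]) * second_ns,
        "Timeout": int(hc["timeout"]) * second_ns,
        "StartPeriod": int(hc["start_period"]) * second_ns,
        "Retries": int(hc["retries"]),
    }

# Healthcheck copied verbatim from the neutron-ovn-metadata-agent
# entries in the log above.
hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
    "timeout": "30",
}
print(to_docker_healthcheck(hc))
```

The 'CMD-SHELL' form means the test string is run through the container's shell; `healthcheck_port` and `healthcheck_curl` seen in the log are scripts shipped inside the kolla images.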
2026-04-07 02:59:47.269539 | orchestrator | 2026-04-07 02:59:47.269544 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-07 02:59:47.269550 | orchestrator | Tuesday 07 April 2026 02:59:41 +0000 (0:00:03.157) 0:00:58.450 ********* 2026-04-07 02:59:47.269554 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:47.269559 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:47.269563 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:47.269568 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:47.269572 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:47.269577 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:47.269581 | orchestrator | 2026-04-07 02:59:47.269586 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-07 02:59:47.269591 | orchestrator | Tuesday 07 April 2026 02:59:43 +0000 (0:00:02.659) 0:01:01.109 ********* 2026-04-07 02:59:47.269595 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:47.269600 | orchestrator | 2026-04-07 02:59:47.269604 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-07 02:59:47.269619 | orchestrator | Tuesday 07 April 2026 02:59:43 +0000 (0:00:00.142) 0:01:01.251 ********* 2026-04-07 02:59:47.269624 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:47.269629 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:47.269633 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:47.269637 | orchestrator | skipping: [testbed-node-3] 2026-04-07 02:59:47.269642 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:47.269646 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:47.269651 | orchestrator | 2026-04-07 02:59:47.269655 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-07 02:59:47.269660 | orchestrator | 
Tuesday 07 April 2026 02:59:44 +0000 (0:00:00.688) 0:01:01.939 ********* 2026-04-07 02:59:47.269668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:47.269673 | orchestrator | skipping: [testbed-node-0] 2026-04-07 02:59:47.269678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:47.269687 | orchestrator | skipping: [testbed-node-2] 2026-04-07 02:59:47.269692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 02:59:47.269697 | orchestrator | skipping: [testbed-node-1] 2026-04-07 02:59:47.269702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:47.269707 | orchestrator | skipping: [testbed-node-3] 2026-04-07 
02:59:47.269718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:55.918515 | orchestrator | skipping: [testbed-node-4] 2026-04-07 02:59:55.918611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 02:59:55.918629 | orchestrator | skipping: [testbed-node-5] 2026-04-07 02:59:55.918641 | orchestrator | 2026-04-07 02:59:55.918652 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-07 02:59:55.918664 | orchestrator | Tuesday 07 April 2026 02:59:47 +0000 (0:00:02.582) 0:01:04.522 
********* 2026-04-07 02:59:55.918676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:55.918716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:55.918729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:55.918768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:55.918776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:55.918864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 02:59:55.918879 | orchestrator | 2026-04-07 02:59:55.918886 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-07 02:59:55.918892 | orchestrator | Tuesday 07 April 2026 02:59:50 +0000 (0:00:03.170) 0:01:07.693 ********* 2026-04-07 02:59:55.918899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:55.918906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 02:59:55.918924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:01.208281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 03:00:01.208418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 
03:00:01.208436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 03:00:01.208449 | orchestrator | 2026-04-07 03:00:01.208462 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-07 03:00:01.208475 | orchestrator | Tuesday 07 April 2026 02:59:55 +0000 (0:00:05.481) 0:01:13.174 ********* 2026-04-07 03:00:01.208488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-04-07 03:00:01.208520 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:01.208566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:01.208589 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:01.208601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:01.208612 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:01.208624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:01.208642 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:01.208660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:01.208691 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:01.208719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:01.208738 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:01.208756 | orchestrator | 2026-04-07 03:00:01.208772 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-07 03:00:01.208835 | orchestrator | Tuesday 07 April 2026 02:59:58 +0000 (0:00:02.371) 0:01:15.545 ********* 2026-04-07 03:00:01.208856 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:01.208874 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:01.208893 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:01.208911 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:00:01.208928 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:00:01.208946 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:00:01.208963 | orchestrator | 2026-04-07 03:00:01.208981 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-07 03:00:01.209015 | orchestrator | Tuesday 07 April 2026 03:00:01 +0000 (0:00:02.914) 0:01:18.460 ********* 2026-04-07 03:00:21.808774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:21.808951 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.808975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:21.808988 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:21.809012 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:21.809099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:21.809114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:21.809125 | orchestrator | 2026-04-07 03:00:21.809136 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-07 03:00:21.809147 | orchestrator | Tuesday 07 April 2026 03:00:05 +0000 (0:00:04.132) 0:01:22.593 ********* 2026-04-07 03:00:21.809158 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809168 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809178 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809189 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809201 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809211 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809223 | orchestrator | 2026-04-07 03:00:21.809235 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-04-07 03:00:21.809245 | orchestrator | Tuesday 07 April 2026 03:00:07 +0000 (0:00:02.394) 0:01:24.987 ********* 2026-04-07 03:00:21.809257 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809267 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809278 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809289 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809300 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809313 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809326 | orchestrator | 2026-04-07 03:00:21.809338 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-07 03:00:21.809350 | orchestrator | Tuesday 07 April 2026 03:00:09 +0000 (0:00:02.230) 0:01:27.217 ********* 2026-04-07 03:00:21.809361 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809374 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809387 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809399 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809411 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809422 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809434 | orchestrator | 2026-04-07 03:00:21.809447 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-07 03:00:21.809473 | orchestrator | Tuesday 07 April 2026 03:00:11 +0000 (0:00:02.015) 0:01:29.232 ********* 2026-04-07 03:00:21.809485 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809499 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809512 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809523 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809535 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809547 | orchestrator | 
skipping: [testbed-node-4] 2026-04-07 03:00:21.809560 | orchestrator | 2026-04-07 03:00:21.809571 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-07 03:00:21.809582 | orchestrator | Tuesday 07 April 2026 03:00:14 +0000 (0:00:02.160) 0:01:31.392 ********* 2026-04-07 03:00:21.809594 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809605 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809617 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809628 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809640 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809651 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809662 | orchestrator | 2026-04-07 03:00:21.809674 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-07 03:00:21.809686 | orchestrator | Tuesday 07 April 2026 03:00:16 +0000 (0:00:02.553) 0:01:33.946 ********* 2026-04-07 03:00:21.809697 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809722 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809735 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809746 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:21.809768 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809780 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:21.809810 | orchestrator | 2026-04-07 03:00:21.809822 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-07 03:00:21.809833 | orchestrator | Tuesday 07 April 2026 03:00:19 +0000 (0:00:02.469) 0:01:36.415 ********* 2026-04-07 03:00:21.809845 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:21.809857 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:21.809870 | orchestrator | skipping: 
[testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:21.809883 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:21.809895 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:21.809909 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:21.809917 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:21.809924 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:21.809945 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:26.805088 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:26.805178 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-07 03:00:26.805191 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:26.805199 | orchestrator | 2026-04-07 03:00:26.805208 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-07 03:00:26.805216 | orchestrator | Tuesday 07 April 2026 03:00:21 +0000 (0:00:02.634) 0:01:39.050 ********* 2026-04-07 03:00:26.805227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:26.805256 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:26.805265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:26.805273 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:26.805281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:26.805289 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:26.805309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:26.805317 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:26.805340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:26.805353 | orchestrator | 
skipping: [testbed-node-4] 2026-04-07 03:00:26.805361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:26.805369 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:26.805376 | orchestrator | 2026-04-07 03:00:26.805383 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-07 03:00:26.805391 | orchestrator | Tuesday 07 April 2026 03:00:24 +0000 (0:00:02.488) 0:01:41.538 ********* 2026-04-07 03:00:26.805399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:26.805406 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:26.805418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:26.805425 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:26.805439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:55.784368 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:55.784488 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:55.784501 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784508 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:55.784514 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784520 | orchestrator | 2026-04-07 03:00:55.784528 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-07 03:00:55.784536 | orchestrator | Tuesday 07 April 2026 03:00:26 +0000 (0:00:02.517) 0:01:44.056 ********* 2026-04-07 03:00:55.784542 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784548 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.784554 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784559 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784566 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784572 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784577 | orchestrator | 2026-04-07 03:00:55.784596 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-07 03:00:55.784602 | orchestrator | Tuesday 07 April 2026 03:00:29 +0000 (0:00:02.415) 0:01:46.471 ********* 2026-04-07 03:00:55.784608 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784614 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
03:00:55.784619 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784625 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:00:55.784631 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:00:55.784637 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:00:55.784642 | orchestrator | 2026-04-07 03:00:55.784648 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-07 03:00:55.784671 | orchestrator | Tuesday 07 April 2026 03:00:33 +0000 (0:00:04.250) 0:01:50.721 ********* 2026-04-07 03:00:55.784677 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784683 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.784688 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784695 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784700 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784706 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784712 | orchestrator | 2026-04-07 03:00:55.784718 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-07 03:00:55.784723 | orchestrator | Tuesday 07 April 2026 03:00:35 +0000 (0:00:02.192) 0:01:52.914 ********* 2026-04-07 03:00:55.784729 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784735 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784740 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.784746 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784752 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784758 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784763 | orchestrator | 2026-04-07 03:00:55.784769 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-07 03:00:55.784787 | orchestrator | Tuesday 07 April 2026 03:00:38 +0000 (0:00:02.451) 0:01:55.366 ********* 2026-04-07 
03:00:55.784794 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784868 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.784874 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784879 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784885 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784891 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784897 | orchestrator | 2026-04-07 03:00:55.784903 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-07 03:00:55.784908 | orchestrator | Tuesday 07 April 2026 03:00:40 +0000 (0:00:02.532) 0:01:57.899 ********* 2026-04-07 03:00:55.784914 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.784920 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.784926 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.784933 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.784943 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.784956 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.784970 | orchestrator | 2026-04-07 03:00:55.784980 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-07 03:00:55.784989 | orchestrator | Tuesday 07 April 2026 03:00:43 +0000 (0:00:02.671) 0:02:00.570 ********* 2026-04-07 03:00:55.784999 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.785008 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.785017 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.785025 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.785034 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.785045 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.785054 | orchestrator | 2026-04-07 03:00:55.785064 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-04-07 03:00:55.785074 | orchestrator | Tuesday 07 April 2026 03:00:45 +0000 (0:00:02.362) 0:02:02.932 ********* 2026-04-07 03:00:55.785083 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.785093 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.785102 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.785108 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.785114 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.785120 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.785125 | orchestrator | 2026-04-07 03:00:55.785131 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-07 03:00:55.785137 | orchestrator | Tuesday 07 April 2026 03:00:48 +0000 (0:00:02.473) 0:02:05.406 ********* 2026-04-07 03:00:55.785143 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.785157 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.785162 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:55.785168 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.785174 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.785180 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.785185 | orchestrator | 2026-04-07 03:00:55.785191 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-07 03:00:55.785197 | orchestrator | Tuesday 07 April 2026 03:00:50 +0000 (0:00:02.610) 0:02:08.016 ********* 2026-04-07 03:00:55.785203 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785209 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.785215 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785221 | orchestrator | skipping: [testbed-node-2] 
2026-04-07 03:00:55.785227 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785232 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:55.785238 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785244 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:55.785250 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785256 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:55.785261 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-07 03:00:55.785273 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:55.785279 | orchestrator | 2026-04-07 03:00:55.785285 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-07 03:00:55.785291 | orchestrator | Tuesday 07 April 2026 03:00:52 +0000 (0:00:02.197) 0:02:10.214 ********* 2026-04-07 03:00:55.785298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:55.785306 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:00:55.785320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:58.602306 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:00:58.602435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-07 03:00:58.602454 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:00:58.602467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:58.602479 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:00:58.602507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:58.602519 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:00:58.602531 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 03:00:58.602543 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:00:58.602554 | orchestrator | 2026-04-07 03:00:58.602566 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-07 03:00:58.602579 | orchestrator | Tuesday 07 April 2026 03:00:55 +0000 (0:00:02.826) 0:02:13.040 ********* 2026-04-07 03:00:58.602610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-04-07 03:00:58.602631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:58.602643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 03:00:58.602661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-07 03:00:58.602673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-07 03:00:58.602701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-07 03:03:19.098554 | orchestrator |
2026-04-07 03:03:19.098679 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-07 03:03:19.098699 | orchestrator | Tuesday 07 April 2026 03:00:58 +0000 (0:00:02.817) 0:02:15.857 *********
2026-04-07 03:03:19.098714 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:03:19.098728 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:03:19.098742 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:03:19.098757 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:03:19.098771 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:03:19.098786 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:03:19.098800 | orchestrator |
2026-04-07 03:03:19.098815 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-07 03:03:19.098930 | orchestrator | Tuesday 07 April 2026 03:00:59 +0000 (0:00:00.903) 0:02:16.761 *********
2026-04-07 03:03:19.098947 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:03:19.098961 | orchestrator |
2026-04-07 03:03:19.098976 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-07 03:03:19.098990 | orchestrator | Tuesday 07 April 2026 03:01:01 +0000 (0:00:02.366) 0:02:19.127 *********
2026-04-07 03:03:19.099003 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:03:19.099017 | orchestrator |
2026-04-07 03:03:19.099032 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-07 03:03:19.099047 | orchestrator | Tuesday 07 April 2026 03:01:04 +0000 (0:00:02.328) 0:02:21.456 *********
2026-04-07 03:03:19.099062 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:03:19.099077 | orchestrator |
2026-04-07 03:03:19.099092 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099109 | orchestrator | Tuesday 07 April 2026 03:01:46 +0000 (0:00:42.637) 0:03:04.094 *********
2026-04-07 03:03:19.099125 | orchestrator |
2026-04-07 03:03:19.099142 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099159 | orchestrator | Tuesday 07 April 2026 03:01:46 +0000 (0:00:00.078) 0:03:04.172 *********
2026-04-07 03:03:19.099175 | orchestrator |
2026-04-07 03:03:19.099190 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099205 | orchestrator | Tuesday 07 April 2026 03:01:46 +0000 (0:00:00.077) 0:03:04.249 *********
2026-04-07 03:03:19.099219 | orchestrator |
2026-04-07 03:03:19.099233 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099246 | orchestrator | Tuesday 07 April 2026 03:01:47 +0000 (0:00:00.101) 0:03:04.351 *********
2026-04-07 03:03:19.099260 | orchestrator |
2026-04-07 03:03:19.099292 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099307 | orchestrator | Tuesday 07 April 2026 03:01:47 +0000 (0:00:00.075) 0:03:04.426 *********
2026-04-07 03:03:19.099320 | orchestrator |
2026-04-07 03:03:19.099332 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-07 03:03:19.099345 | orchestrator | Tuesday 07 April 2026 03:01:47 +0000 (0:00:00.110) 0:03:04.536 *********
2026-04-07 03:03:19.099358 | orchestrator |
2026-04-07 03:03:19.099370 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-07 03:03:19.099383 | orchestrator | Tuesday 07 April 2026 03:01:47 +0000 (0:00:00.144) 0:03:04.681 *********
2026-04-07 03:03:19.099422 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:03:19.099437 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:03:19.099451 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:03:19.099465 | orchestrator |
2026-04-07 03:03:19.099479 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-07 03:03:19.099493 | orchestrator | Tuesday 07 April 2026 03:02:12 +0000 (0:00:25.021) 0:03:29.703 *********
2026-04-07 03:03:19.099506 | orchestrator | changed: [testbed-node-4]
2026-04-07 03:03:19.099519 | orchestrator | changed: [testbed-node-3]
2026-04-07 03:03:19.099531 | orchestrator | changed: [testbed-node-5]
2026-04-07 03:03:19.099542 | orchestrator |
2026-04-07 03:03:19.099556 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:03:19.099572 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 03:03:19.099587 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-07 03:03:19.099600 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-07 03:03:19.099614 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 03:03:19.099627 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 03:03:19.099640 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-07 03:03:19.099650 | orchestrator |
2026-04-07 03:03:19.099658 | orchestrator |
2026-04-07 03:03:19.099666 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:03:19.099674 | orchestrator | Tuesday 07 April 2026 03:03:18 +0000 (0:01:06.038) 0:04:35.742 *********
2026-04-07 03:03:19.099682 | orchestrator | ===============================================================================
2026-04-07 03:03:19.099689 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 66.04s
2026-04-07 03:03:19.099697 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.64s
2026-04-07 03:03:19.099705 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.02s
2026-04-07 03:03:19.099735 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.15s
2026-04-07 03:03:19.099743 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.90s
2026-04-07 03:03:19.099751 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.48s
2026-04-07 03:03:19.099759 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.25s
2026-04-07 03:03:19.099767 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.13s
2026-04-07 03:03:19.099774 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.07s
2026-04-07 03:03:19.099782 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.00s
2026-04-07 03:03:19.099790 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.63s
2026-04-07 03:03:19.099798 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.48s
2026-04-07 03:03:19.099806 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.48s
2026-04-07 03:03:19.099814 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.17s
2026-04-07 03:03:19.099849 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.16s
2026-04-07 03:03:19.099864 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.92s
2026-04-07 03:03:19.099890 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.83s
2026-04-07 03:03:19.099904 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.82s
2026-04-07 03:03:19.099917 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 2.67s
2026-04-07 03:03:19.099927 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.66s
2026-04-07 03:03:21.625744 | orchestrator | 2026-04-07 03:03:21 | INFO  | Task a44843ec-cc0a-4e07-b224-fbb6559726e1 (nova) was prepared for execution.
2026-04-07 03:03:21.625944 | orchestrator | 2026-04-07 03:03:21 | INFO  | It takes a moment until task a44843ec-cc0a-4e07-b224-fbb6559726e1 (nova) has been started and output is visible here.
2026-04-07 03:05:27.167059 | orchestrator | 2026-04-07 03:05:27.167177 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:05:27.167188 | orchestrator | 2026-04-07 03:05:27.167196 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-07 03:05:27.167203 | orchestrator | Tuesday 07 April 2026 03:03:26 +0000 (0:00:00.309) 0:00:00.309 ********* 2026-04-07 03:05:27.167210 | orchestrator | changed: [testbed-manager] 2026-04-07 03:05:27.167218 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167225 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:05:27.167231 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:05:27.167238 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:05:27.167244 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:05:27.167251 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:05:27.167258 | orchestrator | 2026-04-07 03:05:27.167265 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:05:27.167271 | orchestrator | Tuesday 07 April 2026 03:03:27 +0000 (0:00:01.077) 0:00:01.386 ********* 2026-04-07 03:05:27.167278 | orchestrator | changed: [testbed-manager] 2026-04-07 03:05:27.167285 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167291 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:05:27.167297 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:05:27.167303 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:05:27.167309 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:05:27.167332 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:05:27.167340 | orchestrator | 2026-04-07 03:05:27.167354 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:05:27.167361 | orchestrator | Tuesday 07 April 2026 03:03:28 +0000 (0:00:00.976) 0:00:02.362 
********* 2026-04-07 03:05:27.167368 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-07 03:05:27.167375 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-07 03:05:27.167382 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-07 03:05:27.167388 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-07 03:05:27.167394 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-07 03:05:27.167401 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-07 03:05:27.167408 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-07 03:05:27.167415 | orchestrator | 2026-04-07 03:05:27.167421 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-07 03:05:27.167428 | orchestrator | 2026-04-07 03:05:27.167435 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-07 03:05:27.167442 | orchestrator | Tuesday 07 April 2026 03:03:29 +0000 (0:00:00.785) 0:00:03.148 ********* 2026-04-07 03:05:27.167449 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:05:27.167456 | orchestrator | 2026-04-07 03:05:27.167462 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-07 03:05:27.167469 | orchestrator | Tuesday 07 April 2026 03:03:30 +0000 (0:00:00.812) 0:00:03.960 ********* 2026-04-07 03:05:27.167476 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-07 03:05:27.167503 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-07 03:05:27.167510 | orchestrator | 2026-04-07 03:05:27.167517 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-07 03:05:27.167523 | orchestrator | Tuesday 07 April 2026 03:03:34 +0000 (0:00:04.409) 0:00:08.370 
********* 2026-04-07 03:05:27.167530 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 03:05:27.167536 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-07 03:05:27.167543 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167550 | orchestrator | 2026-04-07 03:05:27.167557 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-07 03:05:27.167563 | orchestrator | Tuesday 07 April 2026 03:03:38 +0000 (0:00:04.341) 0:00:12.711 ********* 2026-04-07 03:05:27.167570 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167577 | orchestrator | 2026-04-07 03:05:27.167584 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-07 03:05:27.167590 | orchestrator | Tuesday 07 April 2026 03:03:39 +0000 (0:00:00.708) 0:00:13.419 ********* 2026-04-07 03:05:27.167597 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167603 | orchestrator | 2026-04-07 03:05:27.167610 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-07 03:05:27.167617 | orchestrator | Tuesday 07 April 2026 03:03:40 +0000 (0:00:01.363) 0:00:14.782 ********* 2026-04-07 03:05:27.167623 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167630 | orchestrator | 2026-04-07 03:05:27.167636 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-07 03:05:27.167643 | orchestrator | Tuesday 07 April 2026 03:03:43 +0000 (0:00:02.966) 0:00:17.749 ********* 2026-04-07 03:05:27.167649 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.167656 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.167662 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.167667 | orchestrator | 2026-04-07 03:05:27.167673 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-07 
03:05:27.167680 | orchestrator | Tuesday 07 April 2026 03:03:44 +0000 (0:00:00.325) 0:00:18.075 ********* 2026-04-07 03:05:27.167685 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:05:27.167692 | orchestrator | 2026-04-07 03:05:27.167699 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-07 03:05:27.167706 | orchestrator | Tuesday 07 April 2026 03:04:18 +0000 (0:00:34.417) 0:00:52.492 ********* 2026-04-07 03:05:27.167712 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.167718 | orchestrator | 2026-04-07 03:05:27.167725 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-07 03:05:27.167732 | orchestrator | Tuesday 07 April 2026 03:04:33 +0000 (0:00:15.194) 0:01:07.686 ********* 2026-04-07 03:05:27.167739 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:05:27.167745 | orchestrator | 2026-04-07 03:05:27.167751 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-07 03:05:27.167758 | orchestrator | Tuesday 07 April 2026 03:04:46 +0000 (0:00:12.778) 0:01:20.465 ********* 2026-04-07 03:05:27.167781 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:05:27.167788 | orchestrator | 2026-04-07 03:05:27.167802 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-07 03:05:27.167809 | orchestrator | Tuesday 07 April 2026 03:04:47 +0000 (0:00:00.759) 0:01:21.225 ********* 2026-04-07 03:05:27.167816 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.167822 | orchestrator | 2026-04-07 03:05:27.167845 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-07 03:05:27.167853 | orchestrator | Tuesday 07 April 2026 03:04:47 +0000 (0:00:00.505) 0:01:21.730 ********* 2026-04-07 03:05:27.167860 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-07 03:05:27.167867 | orchestrator | 2026-04-07 03:05:27.167874 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-07 03:05:27.167888 | orchestrator | Tuesday 07 April 2026 03:04:48 +0000 (0:00:00.762) 0:01:22.493 ********* 2026-04-07 03:05:27.167896 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:05:27.167903 | orchestrator | 2026-04-07 03:05:27.167911 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-07 03:05:27.167918 | orchestrator | Tuesday 07 April 2026 03:05:07 +0000 (0:00:18.822) 0:01:41.315 ********* 2026-04-07 03:05:27.167925 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.167933 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.167940 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.167947 | orchestrator | 2026-04-07 03:05:27.167955 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-07 03:05:27.167961 | orchestrator | 2026-04-07 03:05:27.167969 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-07 03:05:27.167976 | orchestrator | Tuesday 07 April 2026 03:05:07 +0000 (0:00:00.333) 0:01:41.648 ********* 2026-04-07 03:05:27.167983 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:05:27.167989 | orchestrator | 2026-04-07 03:05:27.167997 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-07 03:05:27.168003 | orchestrator | Tuesday 07 April 2026 03:05:08 +0000 (0:00:00.829) 0:01:42.478 ********* 2026-04-07 03:05:27.168010 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168017 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168025 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.168032 | orchestrator | 
2026-04-07 03:05:27.168040 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-07 03:05:27.168046 | orchestrator | Tuesday 07 April 2026 03:05:10 +0000 (0:00:02.096) 0:01:44.575 ********* 2026-04-07 03:05:27.168054 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168061 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168068 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.168076 | orchestrator | 2026-04-07 03:05:27.168083 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-07 03:05:27.168090 | orchestrator | Tuesday 07 April 2026 03:05:12 +0000 (0:00:02.159) 0:01:46.734 ********* 2026-04-07 03:05:27.168097 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.168104 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168111 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168119 | orchestrator | 2026-04-07 03:05:27.168126 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-07 03:05:27.168133 | orchestrator | Tuesday 07 April 2026 03:05:13 +0000 (0:00:00.604) 0:01:47.339 ********* 2026-04-07 03:05:27.168140 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-07 03:05:27.168147 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168155 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-07 03:05:27.168162 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168170 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-07 03:05:27.168178 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-07 03:05:27.168185 | orchestrator | 2026-04-07 03:05:27.168193 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-07 03:05:27.168201 | orchestrator | Tuesday 07 April 2026 03:05:21 +0000 
(0:00:07.887) 0:01:55.226 ********* 2026-04-07 03:05:27.168207 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.168214 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168221 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168227 | orchestrator | 2026-04-07 03:05:27.168234 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-07 03:05:27.168241 | orchestrator | Tuesday 07 April 2026 03:05:21 +0000 (0:00:00.341) 0:01:55.568 ********* 2026-04-07 03:05:27.168248 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-07 03:05:27.168255 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:05:27.168262 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-07 03:05:27.168276 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168284 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-07 03:05:27.168291 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168298 | orchestrator | 2026-04-07 03:05:27.168305 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-07 03:05:27.168311 | orchestrator | Tuesday 07 April 2026 03:05:22 +0000 (0:00:01.222) 0:01:56.790 ********* 2026-04-07 03:05:27.168318 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168326 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168333 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:05:27.168341 | orchestrator | 2026-04-07 03:05:27.168348 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-07 03:05:27.168355 | orchestrator | Tuesday 07 April 2026 03:05:23 +0000 (0:00:00.469) 0:01:57.259 ********* 2026-04-07 03:05:27.168362 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168369 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168376 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 03:05:27.168383 | orchestrator | 2026-04-07 03:05:27.168391 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-07 03:05:27.168398 | orchestrator | Tuesday 07 April 2026 03:05:24 +0000 (0:00:01.014) 0:01:58.274 ********* 2026-04-07 03:05:27.168405 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:05:27.168413 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:05:27.168425 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:06:48.720256 | orchestrator | 2026-04-07 03:06:48.720374 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-07 03:06:48.720392 | orchestrator | Tuesday 07 April 2026 03:05:27 +0000 (0:00:02.715) 0:02:00.990 ********* 2026-04-07 03:06:48.720405 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720418 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:48.720429 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:06:48.720441 | orchestrator | 2026-04-07 03:06:48.720452 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-07 03:06:48.720463 | orchestrator | Tuesday 07 April 2026 03:05:49 +0000 (0:00:22.343) 0:02:23.333 ********* 2026-04-07 03:06:48.720474 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720485 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:48.720497 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:06:48.720508 | orchestrator | 2026-04-07 03:06:48.720519 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-07 03:06:48.720530 | orchestrator | Tuesday 07 April 2026 03:06:02 +0000 (0:00:12.665) 0:02:35.998 ********* 2026-04-07 03:06:48.720540 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:06:48.720551 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720562 | orchestrator | skipping: [testbed-node-2] 
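The "Get a list of existing cells" and "Extract current cell settings from list" tasks above work over the ASCII table that `nova-manage cell_v2 list_cells --verbose` prints. A minimal sketch of that extraction step, assuming the standard four-column table layout (the cell name and the elided URL values below are hypothetical, not taken from this log):

```python
# Hypothetical sketch: recover each cell's UUID, transport URL and database
# connection from the ASCII table printed by
# `nova-manage cell_v2 list_cells --verbose`. Column order is an assumption.
def parse_cell_list(output: str) -> dict:
    """Return {cell_name: {"uuid", "transport_url", "database_connection"}}."""
    cells = {}
    for line in output.splitlines():
        if not line.startswith("|"):
            continue  # skip the +---+ border rows
        cols = [c.strip() for c in line.strip("|").split("|")]
        if cols[0] in ("Name", ""):
            continue  # skip the header row
        name, uuid, transport_url, db_url = cols[:4]
        cells[name] = {
            "uuid": uuid,
            "transport_url": transport_url,
            "database_connection": db_url,
        }
    return cells


# Sample table with placeholder values (hypothetical):
sample = """\
+-------+------+--------------------------------------+----------------------------------+
|  Name | UUID |            Transport URL             |       Database Connection        |
+-------+------+--------------------------------------+----------------------------------+
| cell1 | 9f3c | rabbit://openstack:...@rabbitmq:5672 | mysql+pymysql://nova:...@db/nova |
+-------+------+--------------------------------------+----------------------------------+
"""
```

The role then compares the parsed `transport_url` and `database_connection` against the desired values to decide between the "Create cell" and "Update cell" tasks seen above.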
2026-04-07 03:06:48.720573 | orchestrator | 2026-04-07 03:06:48.720584 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-07 03:06:48.720595 | orchestrator | Tuesday 07 April 2026 03:06:03 +0000 (0:00:01.191) 0:02:37.190 ********* 2026-04-07 03:06:48.720606 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720617 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:48.720629 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:06:48.720640 | orchestrator | 2026-04-07 03:06:48.720651 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-07 03:06:48.720662 | orchestrator | Tuesday 07 April 2026 03:06:16 +0000 (0:00:13.127) 0:02:50.317 ********* 2026-04-07 03:06:48.720672 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:48.720684 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720694 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:48.720705 | orchestrator | 2026-04-07 03:06:48.720716 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-07 03:06:48.720727 | orchestrator | Tuesday 07 April 2026 03:06:17 +0000 (0:00:01.125) 0:02:51.442 ********* 2026-04-07 03:06:48.720764 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:48.720782 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:48.720801 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:48.720821 | orchestrator | 2026-04-07 03:06:48.720869 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-07 03:06:48.720890 | orchestrator | 2026-04-07 03:06:48.720909 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-07 03:06:48.720927 | orchestrator | Tuesday 07 April 2026 03:06:17 +0000 (0:00:00.344) 0:02:51.787 ********* 2026-04-07 03:06:48.721093 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:06:48.721132 | orchestrator | 2026-04-07 03:06:48.721150 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-07 03:06:48.721168 | orchestrator | Tuesday 07 April 2026 03:06:18 +0000 (0:00:00.816) 0:02:52.603 ********* 2026-04-07 03:06:48.721186 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-07 03:06:48.721206 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-07 03:06:48.721223 | orchestrator | 2026-04-07 03:06:48.721240 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-07 03:06:48.721258 | orchestrator | Tuesday 07 April 2026 03:06:22 +0000 (0:00:03.378) 0:02:55.981 ********* 2026-04-07 03:06:48.721277 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-07 03:06:48.721298 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-07 03:06:48.721317 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-07 03:06:48.721336 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-07 03:06:48.721356 | orchestrator | 2026-04-07 03:06:48.721375 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-07 03:06:48.721393 | orchestrator | Tuesday 07 April 2026 03:06:28 +0000 (0:00:06.656) 0:03:02.638 ********* 2026-04-07 03:06:48.721412 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:06:48.721429 | orchestrator | 2026-04-07 03:06:48.721449 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-04-07 03:06:48.721468 | orchestrator | Tuesday 07 April 2026 03:06:32 +0000 (0:00:03.379) 0:03:06.018 ********* 2026-04-07 03:06:48.721486 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 03:06:48.721504 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-07 03:06:48.721519 | orchestrator | 2026-04-07 03:06:48.721530 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-07 03:06:48.721541 | orchestrator | Tuesday 07 April 2026 03:06:36 +0000 (0:00:03.872) 0:03:09.890 ********* 2026-04-07 03:06:48.721552 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 03:06:48.721562 | orchestrator | 2026-04-07 03:06:48.721573 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-07 03:06:48.721584 | orchestrator | Tuesday 07 April 2026 03:06:39 +0000 (0:00:03.191) 0:03:13.082 ********* 2026-04-07 03:06:48.721595 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-07 03:06:48.721606 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-07 03:06:48.721617 | orchestrator | 2026-04-07 03:06:48.721628 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-07 03:06:48.721672 | orchestrator | Tuesday 07 April 2026 03:06:47 +0000 (0:00:08.074) 0:03:21.156 ********* 2026-04-07 03:06:48.721691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:48.721726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:48.721741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:48.721767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-07 03:06:53.689730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:53.689903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:53.689933 | orchestrator | 2026-04-07 03:06:53.689954 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-07 03:06:53.689972 | orchestrator | Tuesday 07 April 2026 03:06:48 +0000 (0:00:01.389) 0:03:22.546 ********* 2026-04-07 03:06:53.689988 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:53.690004 | orchestrator | 2026-04-07 03:06:53.690083 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-07 03:06:53.690103 | orchestrator | Tuesday 07 April 2026 03:06:48 +0000 (0:00:00.139) 0:03:22.685 ********* 2026-04-07 03:06:53.690122 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:53.690139 | 
orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:53.690159 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:53.690177 | orchestrator | 2026-04-07 03:06:53.690195 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-07 03:06:53.690213 | orchestrator | Tuesday 07 April 2026 03:06:49 +0000 (0:00:00.322) 0:03:23.007 ********* 2026-04-07 03:06:53.690231 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:06:53.690242 | orchestrator | 2026-04-07 03:06:53.690252 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-07 03:06:53.690261 | orchestrator | Tuesday 07 April 2026 03:06:49 +0000 (0:00:00.731) 0:03:23.739 ********* 2026-04-07 03:06:53.690271 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:53.690281 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:53.690290 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:53.690300 | orchestrator | 2026-04-07 03:06:53.690310 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-07 03:06:53.690320 | orchestrator | Tuesday 07 April 2026 03:06:50 +0000 (0:00:00.584) 0:03:24.324 ********* 2026-04-07 03:06:53.690330 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:06:53.690340 | orchestrator | 2026-04-07 03:06:53.690350 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-07 03:06:53.690360 | orchestrator | Tuesday 07 April 2026 03:06:51 +0000 (0:00:00.770) 0:03:25.094 ********* 2026-04-07 03:06:53.690391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:53.690446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:53.690461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:53.690472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:53.690483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:53.690507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:53.690517 | orchestrator | 2026-04-07 03:06:53.690535 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-07 03:06:55.492708 | orchestrator | Tuesday 07 April 2026 03:06:53 +0000 (0:00:02.412) 0:03:27.507 ********* 2026-04-07 03:06:55.492827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:55.492910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:55.492929 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:06:55.492946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:55.492987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:55.493019 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:06:55.493059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:55.493078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:55.493094 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:06:55.493108 | orchestrator | 2026-04-07 03:06:55.493124 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-07 03:06:55.493138 | orchestrator | Tuesday 07 April 2026 03:06:54 +0000 (0:00:00.959) 0:03:28.466 
********* 2026-04-07 03:06:55.493153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:55.493176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:55.493190 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 03:06:55.493221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:57.728301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:57.728446 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
03:06:57.728479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:06:57.728543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:06:57.728569 | orchestrator | skipping: [testbed-node-2] 2026-04-07 
03:06:57.728588 | orchestrator | 2026-04-07 03:06:57.728607 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-07 03:06:57.728627 | orchestrator | Tuesday 07 April 2026 03:06:55 +0000 (0:00:00.854) 0:03:29.320 ********* 2026-04-07 03:06:57.728676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:57.728727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:57.728743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:06:57.728769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:57.728787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:06:57.728808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-07 03:07:04.976343 | orchestrator | 2026-04-07 03:07:04.976515 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-07 03:07:04.976536 | orchestrator | Tuesday 07 April 2026 03:06:57 +0000 (0:00:02.234) 0:03:31.555 ********* 2026-04-07 03:07:04.976554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:04.976599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:04.976633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:04.976672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:04.976686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:04.976708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:04.976719 | orchestrator | 2026-04-07 03:07:04.976731 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-07 03:07:04.976743 | orchestrator | Tuesday 07 April 2026 03:07:04 +0000 (0:00:06.605) 0:03:38.161 ********* 2026-04-07 03:07:04.976761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:07:04.976774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:07:04.976786 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:04.976809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:07:09.487811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:07:09.487998 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:09.488047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-07 03:07:09.488074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:07:09.488084 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:09.488093 | orchestrator | 2026-04-07 03:07:09.488103 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-07 03:07:09.488114 | orchestrator | Tuesday 07 April 2026 03:07:04 +0000 (0:00:00.642) 0:03:38.803 ********* 2026-04-07 03:07:09.488123 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:07:09.488131 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:07:09.488140 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:07:09.488149 | orchestrator | 2026-04-07 03:07:09.488157 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-07 03:07:09.488166 | orchestrator | Tuesday 07 April 2026 03:07:06 +0000 (0:00:01.587) 0:03:40.390 ********* 2026-04-07 03:07:09.488175 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:09.488184 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:09.488192 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:09.488201 | orchestrator | 2026-04-07 03:07:09.488210 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-07 03:07:09.488228 | orchestrator | Tuesday 07 April 2026 03:07:06 +0000 (0:00:00.365) 0:03:40.756 ********* 2026-04-07 03:07:09.488257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:09.488289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:09.488306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-07 03:07:09.488316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:09.488336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:09.488356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:07:58.982169 | orchestrator | 2026-04-07 03:07:58.982250 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 03:07:58.982261 | orchestrator | Tuesday 07 April 2026 03:07:08 +0000 (0:00:02.080) 0:03:42.836 ********* 2026-04-07 03:07:58.982267 | orchestrator | 2026-04-07 03:07:58.982273 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 03:07:58.982279 | orchestrator | Tuesday 07 April 2026 03:07:09 +0000 (0:00:00.156) 0:03:42.992 ********* 2026-04-07 
03:07:58.982285 | orchestrator | 2026-04-07 03:07:58.982291 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-07 03:07:58.982297 | orchestrator | Tuesday 07 April 2026 03:07:09 +0000 (0:00:00.167) 0:03:43.160 ********* 2026-04-07 03:07:58.982302 | orchestrator | 2026-04-07 03:07:58.982308 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-07 03:07:58.982314 | orchestrator | Tuesday 07 April 2026 03:07:09 +0000 (0:00:00.153) 0:03:43.313 ********* 2026-04-07 03:07:58.982319 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:07:58.982326 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:07:58.982331 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:07:58.982337 | orchestrator | 2026-04-07 03:07:58.982343 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-07 03:07:58.982348 | orchestrator | Tuesday 07 April 2026 03:07:34 +0000 (0:00:24.681) 0:04:07.995 ********* 2026-04-07 03:07:58.982354 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:07:58.982360 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:07:58.982365 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:07:58.982371 | orchestrator | 2026-04-07 03:07:58.982376 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-07 03:07:58.982382 | orchestrator | 2026-04-07 03:07:58.982387 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 03:07:58.982393 | orchestrator | Tuesday 07 April 2026 03:07:46 +0000 (0:00:12.004) 0:04:19.999 ********* 2026-04-07 03:07:58.982400 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:07:58.982407 | orchestrator | 2026-04-07 03:07:58.982412 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 03:07:58.982429 | orchestrator | Tuesday 07 April 2026 03:07:47 +0000 (0:00:01.326) 0:04:21.325 ********* 2026-04-07 03:07:58.982435 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:07:58.982441 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:07:58.982446 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:07:58.982468 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:58.982474 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:58.982480 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:58.982485 | orchestrator | 2026-04-07 03:07:58.982491 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-07 03:07:58.982496 | orchestrator | Tuesday 07 April 2026 03:07:48 +0000 (0:00:00.848) 0:04:22.174 ********* 2026-04-07 03:07:58.982502 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:58.982508 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:58.982513 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:58.982519 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:07:58.982525 | orchestrator | 2026-04-07 03:07:58.982530 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 03:07:58.982536 | orchestrator | Tuesday 07 April 2026 03:07:49 +0000 (0:00:01.047) 0:04:23.222 ********* 2026-04-07 03:07:58.982543 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-07 03:07:58.982549 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-07 03:07:58.982554 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-07 03:07:58.982560 | orchestrator | 2026-04-07 03:07:58.982566 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 
03:07:58.982571 | orchestrator | Tuesday 07 April 2026 03:07:50 +0000 (0:00:00.992) 0:04:24.214 ********* 2026-04-07 03:07:58.982577 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-07 03:07:58.982582 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-07 03:07:58.982588 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-07 03:07:58.982593 | orchestrator | 2026-04-07 03:07:58.982599 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 03:07:58.982604 | orchestrator | Tuesday 07 April 2026 03:07:51 +0000 (0:00:01.218) 0:04:25.432 ********* 2026-04-07 03:07:58.982610 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-07 03:07:58.982615 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:07:58.982621 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-07 03:07:58.982626 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:07:58.982632 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-07 03:07:58.982637 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:07:58.982643 | orchestrator | 2026-04-07 03:07:58.982648 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-07 03:07:58.982654 | orchestrator | Tuesday 07 April 2026 03:07:52 +0000 (0:00:00.593) 0:04:26.026 ********* 2026-04-07 03:07:58.982659 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 03:07:58.982665 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 03:07:58.982682 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 03:07:58.982688 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 03:07:58.982694 | orchestrator | skipping: [testbed-node-0] 
2026-04-07 03:07:58.982699 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 03:07:58.982705 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 03:07:58.982723 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 03:07:58.982731 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-07 03:07:58.982738 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:58.982744 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 03:07:58.982751 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 03:07:58.982777 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:58.982790 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 03:07:58.982797 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-07 03:07:58.982804 | orchestrator | 2026-04-07 03:07:58.982810 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-07 03:07:58.982817 | orchestrator | Tuesday 07 April 2026 03:07:54 +0000 (0:00:01.991) 0:04:28.018 ********* 2026-04-07 03:07:58.982823 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:58.982829 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:58.982836 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:58.982842 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:07:58.982849 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:07:58.982855 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:07:58.982861 | orchestrator | 2026-04-07 03:07:58.982868 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-07 03:07:58.982874 | orchestrator | 
Tuesday 07 April 2026 03:07:55 +0000 (0:00:01.094) 0:04:29.112 ********* 2026-04-07 03:07:58.982881 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:07:58.982887 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:07:58.982894 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:07:58.982901 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:07:58.982908 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:07:58.982914 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:07:58.982920 | orchestrator | 2026-04-07 03:07:58.982926 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-07 03:07:58.982933 | orchestrator | Tuesday 07 April 2026 03:07:56 +0000 (0:00:01.716) 0:04:30.828 ********* 2026-04-07 03:07:58.982946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:07:58.982957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:07:58.982969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:00.891921 | orchestrator | 2026-04-07 03:08:00.891934 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 03:08:00.891947 | 
orchestrator | Tuesday 07 April 2026 03:07:59 +0000 (0:00:02.338) 0:04:33.167 ********* 2026-04-07 03:08:00.891961 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:08:00.891975 | orchestrator | 2026-04-07 03:08:00.891986 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-07 03:08:00.892005 | orchestrator | Tuesday 07 April 2026 03:08:00 +0000 (0:00:01.554) 0:04:34.722 ********* 2026-04-07 03:08:04.486345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-07 03:08:04.486498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486591 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:04.486682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:06.078309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:06.078408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:06.078428 | orchestrator | 2026-04-07 03:08:06.078437 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-07 03:08:06.078445 | orchestrator | Tuesday 07 April 2026 03:08:04 +0000 (0:00:03.810) 0:04:38.532 ********* 2026-04-07 03:08:06.078454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:06.078477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:06.078485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:06.078492 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:08:06.078515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:06.078523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:06.078529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:06.078541 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:08:06.078548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:06.078554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:06.078567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:07.978258 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:08:07.978355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:07.978368 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:07.978391 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:08:07.978400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:07.978411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:07.978422 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
03:08:07.978432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:07.978443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:07.978453 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:08:07.978463 | orchestrator | 2026-04-07 03:08:07.978473 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-07 03:08:07.978485 | orchestrator | Tuesday 07 April 2026 03:08:06 +0000 (0:00:01.861) 0:04:40.393 ********* 2026-04-07 03:08:07.978518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:07.978539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:07.978552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:07.978564 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:08:07.978575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:07.978586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:07.978610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:16.149533 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:08:16.149655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:16.149671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:16.149737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:08:16.149743 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:08:16.149747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:16.149752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:16.149756 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:08:16.149785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:16.149795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:16.149800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:08:16.149803 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:08:16.149807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:08:16.149811 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:08:16.149815 | orchestrator | 2026-04-07 03:08:16.149820 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 03:08:16.149825 | orchestrator | Tuesday 07 April 2026 03:08:08 +0000 (0:00:02.143) 0:04:42.537 ********* 2026-04-07 03:08:16.149829 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:08:16.149833 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:08:16.149836 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:08:16.149841 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:08:16.149852 | orchestrator | 2026-04-07 03:08:16.149856 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-07 03:08:16.149860 | 
orchestrator | Tuesday 07 April 2026 03:08:09 +0000 (0:00:01.171) 0:04:43.708 *********
2026-04-07 03:08:16.149870 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 03:08:16.149874 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 03:08:16.149878 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 03:08:16.149882 | orchestrator |
2026-04-07 03:08:16.149886 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-07 03:08:16.149890 | orchestrator | Tuesday 07 April 2026 03:08:11 +0000 (0:00:01.254) 0:04:44.963 *********
2026-04-07 03:08:16.149894 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 03:08:16.149898 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 03:08:16.149902 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 03:08:16.149905 | orchestrator |
2026-04-07 03:08:16.149909 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-07 03:08:16.149913 | orchestrator | Tuesday 07 April 2026 03:08:12 +0000 (0:00:01.087) 0:04:46.051 *********
2026-04-07 03:08:16.149921 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:08:16.149926 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:08:16.149929 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:08:16.149933 | orchestrator |
2026-04-07 03:08:16.149937 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-07 03:08:16.149941 | orchestrator | Tuesday 07 April 2026 03:08:12 +0000 (0:00:00.577) 0:04:46.629 *********
2026-04-07 03:08:16.149945 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:08:16.149948 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:08:16.149952 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:08:16.149956 | orchestrator |
2026-04-07 03:08:16.149960 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-07 03:08:16.149964 | orchestrator | Tuesday 07 April 2026 03:08:13 +0000 (0:00:00.570) 0:04:47.199 *********
2026-04-07 03:08:16.149967 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 03:08:16.149971 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 03:08:16.149975 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 03:08:16.149986 | orchestrator |
2026-04-07 03:08:16.149990 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-07 03:08:16.149993 | orchestrator | Tuesday 07 April 2026 03:08:14 +0000 (0:00:01.486) 0:04:48.686 *********
2026-04-07 03:08:16.150004 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 03:08:35.960668 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 03:08:35.960762 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 03:08:35.960774 | orchestrator |
2026-04-07 03:08:35.960784 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-07 03:08:35.960793 | orchestrator | Tuesday 07 April 2026 03:08:16 +0000 (0:00:01.292) 0:04:49.978 *********
2026-04-07 03:08:35.960802 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-07 03:08:35.960810 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-07 03:08:35.960818 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-07 03:08:35.960826 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-07 03:08:35.960834 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-07 03:08:35.960842 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-07 03:08:35.960850 | orchestrator |
2026-04-07 03:08:35.960857 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-07 03:08:35.960866 | orchestrator | Tuesday 07 April 2026 03:08:20 +0000 (0:00:03.948) 0:04:53.926 *********
2026-04-07 03:08:35.960874 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:08:35.960883 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:08:35.960891 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:08:35.960899 | orchestrator |
2026-04-07 03:08:35.960907 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-07 03:08:35.960915 | orchestrator | Tuesday 07 April 2026 03:08:20 +0000 (0:00:00.337) 0:04:54.264 *********
2026-04-07 03:08:35.960923 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:08:35.960931 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:08:35.960939 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:08:35.960947 | orchestrator |
2026-04-07 03:08:35.960955 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-07 03:08:35.960963 | orchestrator | Tuesday 07 April 2026 03:08:21 +0000 (0:00:00.585) 0:04:54.850 *********
2026-04-07 03:08:35.960972 | orchestrator | changed: [testbed-node-3]
2026-04-07 03:08:35.960979 | orchestrator | changed: [testbed-node-4]
2026-04-07 03:08:35.960987 | orchestrator | changed: [testbed-node-5]
2026-04-07 03:08:35.960995 | orchestrator |
2026-04-07 03:08:35.961003 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-07 03:08:35.961011 | orchestrator | Tuesday 07 April 2026 03:08:22 +0000 (0:00:01.302) 0:04:56.152 *********
2026-04-07 03:08:35.961019 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 03:08:35.961049 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 03:08:35.961057 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-07 03:08:35.961066 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 03:08:35.961073 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 03:08:35.961081 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-07 03:08:35.961089 | orchestrator |
2026-04-07 03:08:35.961097 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-07 03:08:35.961105 | orchestrator | Tuesday 07 April 2026 03:08:25 +0000 (0:00:03.624) 0:04:59.776 *********
2026-04-07 03:08:35.961112 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 03:08:35.961120 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 03:08:35.961128 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 03:08:35.961136 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-07 03:08:35.961143 | orchestrator | changed: [testbed-node-3]
2026-04-07 03:08:35.961151 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-07 03:08:35.961159 | orchestrator | changed: [testbed-node-4]
2026-04-07 03:08:35.961169 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-07 03:08:35.961178 | orchestrator | changed: [testbed-node-5]
2026-04-07 03:08:35.961187 | orchestrator |
2026-04-07 03:08:35.961198 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-07 03:08:35.961207 | orchestrator | Tuesday 07 April 2026 03:08:29 +0000 (0:00:00.154) 0:05:03.321 *********
2026-04-07 03:08:35.961216 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:08:35.961225 | orchestrator |
2026-04-07 03:08:35.961234 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-07 03:08:35.961244 | orchestrator | Tuesday 07 April 2026 03:08:29 +0000 (0:00:00.154) 0:05:03.475 *********
2026-04-07 03:08:35.961254 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:08:35.961263 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:08:35.961272 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:08:35.961282 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:08:35.961292 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:08:35.961300 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:08:35.961309 | orchestrator |
2026-04-07 03:08:35.961319 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-07 03:08:35.961328 | orchestrator | Tuesday 07 April 2026 03:08:30 +0000 (0:00:00.922) 0:05:04.398 *********
2026-04-07 03:08:35.961337 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 03:08:35.961346 | orchestrator |
2026-04-07 03:08:35.961355 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-07 03:08:35.961365 | orchestrator | Tuesday 07 April 2026 03:08:31 +0000 (0:00:00.839) 0:05:05.237 *********
2026-04-07 03:08:35.961387 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:08:35.961409 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:08:35.961418 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:08:35.961426 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:08:35.961434 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:08:35.961441 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:08:35.961449 | orchestrator |
2026-04-07 03:08:35.961457 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-07 03:08:35.961465 | orchestrator | Tuesday
07 April 2026 03:08:32 +0000 (0:00:00.889) 0:05:06.126 ********* 2026-04-07 03:08:35.961485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:35.961497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:35.961506 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:08:35.961515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:35.961535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948433 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:41.948483 | orchestrator | 2026-04-07 03:08:41.948491 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-07 03:08:41.948499 | orchestrator | Tuesday 07 April 2026 03:08:36 +0000 (0:00:04.217) 0:05:10.343 ********* 2026-04-07 03:08:41.948506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:41.948516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:41.948533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:44.357534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:08:44.357702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:08:44.357722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-04-07 03:08:44.357736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:08:44.357934 | orchestrator | 2026-04-07 03:08:44.357949 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-07 03:08:44.357971 | orchestrator | Tuesday 07 April 2026 03:08:44 +0000 (0:00:07.837) 0:05:18.181 ********* 2026-04-07 03:09:07.881152 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:07.881229 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:07.881235 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:07.881240 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881244 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881248 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881253 | orchestrator | 2026-04-07 03:09:07.881258 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-07 03:09:07.881264 | orchestrator | Tuesday 07 April 2026 03:08:45 +0000 (0:00:01.607) 0:05:19.789 ********* 2026-04-07 03:09:07.881268 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-07 03:09:07.881273 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-07 03:09:07.881277 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-07 03:09:07.881281 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-07 03:09:07.881285 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-07 
03:09:07.881289 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-07 03:09:07.881293 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-07 03:09:07.881297 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881301 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-07 03:09:07.881305 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881309 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-07 03:09:07.881313 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881317 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-07 03:09:07.881321 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-07 03:09:07.881338 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-07 03:09:07.881343 | orchestrator | 2026-04-07 03:09:07.881347 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-07 03:09:07.881351 | orchestrator | Tuesday 07 April 2026 03:08:50 +0000 (0:00:04.068) 0:05:23.857 ********* 2026-04-07 03:09:07.881354 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:07.881358 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:07.881362 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:07.881366 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881370 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881373 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881377 | orchestrator | 2026-04-07 03:09:07.881381 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] 
********************* 2026-04-07 03:09:07.881385 | orchestrator | Tuesday 07 April 2026 03:08:50 +0000 (0:00:00.686) 0:05:24.543 ********* 2026-04-07 03:09:07.881389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 03:09:07.881393 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 03:09:07.881397 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 03:09:07.881401 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-07 03:09:07.881404 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 03:09:07.881408 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-07 03:09:07.881421 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881426 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881429 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881433 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881474 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881479 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881483 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881486 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-07 03:09:07.881490 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881494 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881498 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881511 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881515 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881519 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881522 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-07 03:09:07.881526 | orchestrator | 2026-04-07 03:09:07.881530 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-07 03:09:07.881534 | orchestrator | Tuesday 07 April 2026 03:08:56 +0000 (0:00:05.538) 0:05:30.082 ********* 2026-04-07 03:09:07.881542 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 03:09:07.881546 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 03:09:07.881550 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-07 03:09:07.881554 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 03:09:07.881558 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 03:09:07.881561 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-07 03:09:07.881566 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 03:09:07.881569 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 03:09:07.881573 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-07 03:09:07.881577 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 03:09:07.881581 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 03:09:07.881585 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-07 03:09:07.881588 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 03:09:07.881592 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881596 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 03:09:07.881600 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881604 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 03:09:07.881615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-07 03:09:07.881619 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881623 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 03:09:07.881632 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-07 03:09:07.881636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 
03:09:07.881639 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 03:09:07.881643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-07 03:09:07.881647 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 03:09:07.881651 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 03:09:07.881654 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-07 03:09:07.881658 | orchestrator | 2026-04-07 03:09:07.881665 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-07 03:09:07.881669 | orchestrator | Tuesday 07 April 2026 03:09:03 +0000 (0:00:07.716) 0:05:37.798 ********* 2026-04-07 03:09:07.881673 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:07.881676 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:07.881680 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:07.881684 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881688 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881691 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881695 | orchestrator | 2026-04-07 03:09:07.881699 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-07 03:09:07.881703 | orchestrator | Tuesday 07 April 2026 03:09:04 +0000 (0:00:00.955) 0:05:38.754 ********* 2026-04-07 03:09:07.881706 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:07.881713 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:07.881768 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:07.881774 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881779 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881784 | 
orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881788 | orchestrator | 2026-04-07 03:09:07.881793 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-07 03:09:07.881798 | orchestrator | Tuesday 07 April 2026 03:09:05 +0000 (0:00:00.707) 0:05:39.461 ********* 2026-04-07 03:09:07.881802 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:07.881807 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:09:07.881811 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:07.881816 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:07.881821 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:09:07.881825 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:09:07.881830 | orchestrator | 2026-04-07 03:09:07.881838 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-07 03:09:09.124523 | orchestrator | Tuesday 07 April 2026 03:09:07 +0000 (0:00:02.238) 0:05:41.700 ********* 2026-04-07 03:09:09.124617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:09:09.124631 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:09:09.124641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:09:09.124649 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:09.124673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:09:09.124699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:09:09.124723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:09:09.124753 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:09.124761 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-07 03:09:09.124767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-07 03:09:09.124773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-07 03:09:09.124790 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:09.124798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:09:09.124811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:09:12.889682 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:12.889787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:09:12.889802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:09:12.889812 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:12.889822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-07 03:09:12.889831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:09:12.889861 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:12.889870 | orchestrator | 2026-04-07 03:09:12.889881 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-07 03:09:12.889891 | orchestrator | Tuesday 07 April 2026 03:09:09 +0000 (0:00:01.460) 0:05:43.160 ********* 2026-04-07 03:09:12.889900 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-07 03:09:12.889909 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-07 03:09:12.889931 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:09:12.889940 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-07 03:09:12.889949 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-07 03:09:12.889958 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:09:12.889967 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-07 03:09:12.889975 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-07 03:09:12.889984 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:09:12.889992 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-07 03:09:12.890001 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-07 03:09:12.890010 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:09:12.890069 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-07 
03:09:12.890079 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-07 03:09:12.890088 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:09:12.890096 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-07 03:09:12.890105 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-07 03:09:12.890114 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:09:12.890123 | orchestrator | 2026-04-07 03:09:12.890132 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-07 03:09:12.890144 | orchestrator | Tuesday 07 April 2026 03:09:10 +0000 (0:00:01.028) 0:05:44.189 ********* 2026-04-07 03:09:12.890185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:09:12.890210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:09:12.890237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-07 03:09:12.890261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:09:12.890278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:09:12.890304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:10:05.459471 | orchestrator | 2026-04-07 03:10:05.459491 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-07 03:10:05.459511 | orchestrator | Tuesday 07 April 2026 03:09:13 +0000 (0:00:02.723) 0:05:46.913 ********* 2026-04-07 
03:10:05.459527 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:10:05.459546 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:10:05.459562 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:10:05.459604 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:10:05.459614 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:10:05.459624 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:10:05.459636 | orchestrator | 2026-04-07 03:10:05.459649 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 03:10:05.459660 | orchestrator | Tuesday 07 April 2026 03:09:13 +0000 (0:00:00.914) 0:05:47.827 ********* 2026-04-07 03:10:05.459671 | orchestrator | 2026-04-07 03:10:05.459683 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 03:10:05.459694 | orchestrator | Tuesday 07 April 2026 03:09:14 +0000 (0:00:00.182) 0:05:48.009 ********* 2026-04-07 03:10:05.459705 | orchestrator | 2026-04-07 03:10:05.459718 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 03:10:05.459737 | orchestrator | Tuesday 07 April 2026 03:09:14 +0000 (0:00:00.159) 0:05:48.169 ********* 2026-04-07 03:10:05.459749 | orchestrator | 2026-04-07 03:10:05.459760 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 03:10:05.459772 | orchestrator | Tuesday 07 April 2026 03:09:14 +0000 (0:00:00.149) 0:05:48.318 ********* 2026-04-07 03:10:05.459783 | orchestrator | 2026-04-07 03:10:05.459795 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-07 03:10:05.459808 | orchestrator | Tuesday 07 April 2026 03:09:14 +0000 (0:00:00.145) 0:05:48.464 ********* 2026-04-07 03:10:05.459818 | orchestrator | 2026-04-07 03:10:05.459830 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-04-07 03:10:05.459841 | orchestrator | Tuesday 07 April 2026 03:09:14 +0000 (0:00:00.369) 0:05:48.833 ********* 2026-04-07 03:10:05.459852 | orchestrator | 2026-04-07 03:10:05.459863 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-07 03:10:05.459874 | orchestrator | Tuesday 07 April 2026 03:09:15 +0000 (0:00:00.175) 0:05:49.009 ********* 2026-04-07 03:10:05.459885 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:10:05.459896 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:10:05.459908 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:10:05.459919 | orchestrator | 2026-04-07 03:10:05.459937 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-07 03:10:05.459954 | orchestrator | Tuesday 07 April 2026 03:09:27 +0000 (0:00:12.459) 0:06:01.468 ********* 2026-04-07 03:10:05.459971 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:10:05.459993 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:10:05.460016 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:10:05.460032 | orchestrator | 2026-04-07 03:10:05.460050 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-07 03:10:05.460084 | orchestrator | Tuesday 07 April 2026 03:09:42 +0000 (0:00:14.813) 0:06:16.282 ********* 2026-04-07 03:10:05.460101 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:10:05.460119 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:10:05.460136 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:10:05.460151 | orchestrator | 2026-04-07 03:10:05.460176 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-07 03:12:37.149219 | orchestrator | Tuesday 07 April 2026 03:10:05 +0000 (0:00:22.995) 0:06:39.277 ********* 2026-04-07 03:12:37.149362 | orchestrator | changed: 
[testbed-node-3] 2026-04-07 03:12:37.149391 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:12:37.149412 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:12:37.149430 | orchestrator | 2026-04-07 03:12:37.149450 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-07 03:12:37.149470 | orchestrator | Tuesday 07 April 2026 03:10:51 +0000 (0:00:46.262) 0:07:25.539 ********* 2026-04-07 03:12:37.149488 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:12:37.149507 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:12:37.149518 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:12:37.149529 | orchestrator | 2026-04-07 03:12:37.149540 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-07 03:12:37.149551 | orchestrator | Tuesday 07 April 2026 03:10:52 +0000 (0:00:00.817) 0:07:26.357 ********* 2026-04-07 03:12:37.149562 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:12:37.149573 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:12:37.149584 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:12:37.149595 | orchestrator | 2026-04-07 03:12:37.149605 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-07 03:12:37.149616 | orchestrator | Tuesday 07 April 2026 03:10:53 +0000 (0:00:00.764) 0:07:27.122 ********* 2026-04-07 03:12:37.149627 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:12:37.149677 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:12:37.149697 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:12:37.149714 | orchestrator | 2026-04-07 03:12:37.149731 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-07 03:12:37.149750 | orchestrator | Tuesday 07 April 2026 03:11:22 +0000 (0:00:28.832) 0:07:55.955 ********* 2026-04-07 03:12:37.149768 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 03:12:37.149784 | orchestrator | 2026-04-07 03:12:37.149801 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-07 03:12:37.149819 | orchestrator | Tuesday 07 April 2026 03:11:22 +0000 (0:00:00.156) 0:07:56.111 ********* 2026-04-07 03:12:37.149837 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:12:37.149857 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:12:37.149877 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:12:37.149897 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:12:37.149916 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:12:37.149936 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-07 03:12:37.149952 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 03:12:37.149965 | orchestrator | 2026-04-07 03:12:37.149976 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-07 03:12:37.149987 | orchestrator | Tuesday 07 April 2026 03:11:45 +0000 (0:00:23.089) 0:08:19.200 ********* 2026-04-07 03:12:37.149998 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:12:37.150009 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:12:37.150106 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:12:37.150119 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:12:37.150129 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:12:37.150141 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:12:37.150151 | orchestrator | 2026-04-07 03:12:37.150163 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-07 03:12:37.150215 | orchestrator | Tuesday 07 April 2026 03:11:55 +0000 (0:00:10.458) 0:08:29.659 ********* 2026-04-07 03:12:37.150227 | orchestrator | skipping: [testbed-node-4] 
2026-04-07 03:12:37.150238 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:12:37.150249 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:12:37.150260 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:12:37.150272 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:12:37.150292 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-07 03:12:37.150321 | orchestrator | 2026-04-07 03:12:37.150357 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-07 03:12:37.150376 | orchestrator | Tuesday 07 April 2026 03:12:01 +0000 (0:00:05.906) 0:08:35.565 ********* 2026-04-07 03:12:37.150393 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 03:12:37.150410 | orchestrator | 2026-04-07 03:12:37.150428 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-07 03:12:37.150446 | orchestrator | Tuesday 07 April 2026 03:12:15 +0000 (0:00:13.970) 0:08:49.536 ********* 2026-04-07 03:12:37.150463 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 03:12:37.150481 | orchestrator | 2026-04-07 03:12:37.150499 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-07 03:12:37.150517 | orchestrator | Tuesday 07 April 2026 03:12:17 +0000 (0:00:01.647) 0:08:51.183 ********* 2026-04-07 03:12:37.150536 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:12:37.150554 | orchestrator | 2026-04-07 03:12:37.150573 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-07 03:12:37.150585 | orchestrator | Tuesday 07 April 2026 03:12:19 +0000 (0:00:01.861) 0:08:53.045 ********* 2026-04-07 03:12:37.150596 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 03:12:37.150607 | orchestrator | 2026-04-07 03:12:37.150618 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-07 03:12:37.150629 | orchestrator | Tuesday 07 April 2026 03:12:31 +0000 (0:00:12.016) 0:09:05.061 ********* 2026-04-07 03:12:37.150682 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:12:37.150695 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:12:37.150706 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:12:37.150717 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:12:37.150727 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:12:37.150741 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:12:37.150760 | orchestrator | 2026-04-07 03:12:37.150784 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-07 03:12:37.150809 | orchestrator | 2026-04-07 03:12:37.150828 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-07 03:12:37.150874 | orchestrator | Tuesday 07 April 2026 03:12:33 +0000 (0:00:01.975) 0:09:07.037 ********* 2026-04-07 03:12:37.150894 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:12:37.150912 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:12:37.150931 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:12:37.150950 | orchestrator | 2026-04-07 03:12:37.150969 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-07 03:12:37.150987 | orchestrator | 2026-04-07 03:12:37.151005 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-07 03:12:37.151016 | orchestrator | Tuesday 07 April 2026 03:12:34 +0000 (0:00:00.998) 0:09:08.035 ********* 2026-04-07 03:12:37.151027 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:12:37.151038 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:12:37.151049 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:12:37.151059 | orchestrator | 2026-04-07 
03:12:37.151070 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-07 03:12:37.151086 | orchestrator | 2026-04-07 03:12:37.151104 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-07 03:12:37.151122 | orchestrator | Tuesday 07 April 2026 03:12:34 +0000 (0:00:00.800) 0:09:08.835 ********* 2026-04-07 03:12:37.151161 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-07 03:12:37.151180 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-07 03:12:37.151198 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-07 03:12:37.151215 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-07 03:12:37.151235 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-07 03:12:37.151254 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-07 03:12:37.151273 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:12:37.151292 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-07 03:12:37.151310 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-07 03:12:37.151325 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-07 03:12:37.151336 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-07 03:12:37.151346 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-07 03:12:37.151357 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-07 03:12:37.151368 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:12:37.151379 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-07 03:12:37.151390 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-07 03:12:37.151400 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)
2026-04-07 03:12:37.151411 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-07 03:12:37.151422 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-07 03:12:37.151433 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-07 03:12:37.151443 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:12:37.151454 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-07 03:12:37.151465 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-07 03:12:37.151476 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-07 03:12:37.151486 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-07 03:12:37.151497 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-07 03:12:37.151508 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-07 03:12:37.151520 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:37.151539 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-07 03:12:37.151570 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-07 03:12:37.151601 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-07 03:12:37.151619 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-07 03:12:37.151708 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-07 03:12:37.151731 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-07 03:12:37.151749 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:37.151771 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-07 03:12:37.151792 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-07 03:12:37.151809 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-07 03:12:37.151820 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-07 03:12:37.151831 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-07 03:12:37.151842 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-07 03:12:37.151853 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:37.151863 | orchestrator |
2026-04-07 03:12:37.151874 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-07 03:12:37.151885 | orchestrator |
2026-04-07 03:12:37.151896 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-07 03:12:37.151918 | orchestrator | Tuesday 07 April 2026 03:12:36 +0000 (0:00:01.505) 0:09:10.341 *********
2026-04-07 03:12:37.151929 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-07 03:12:37.151940 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-07 03:12:37.151951 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:37.151962 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-07 03:12:37.151974 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-07 03:12:37.151993 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:37.152011 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-07 03:12:37.152029 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-07 03:12:37.152047 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:37.152065 | orchestrator |
2026-04-07 03:12:37.152095 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-07 03:12:39.034353 | orchestrator |
2026-04-07 03:12:39.034449 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-07 03:12:39.034462 | orchestrator | Tuesday 07 April 2026 03:12:37 +0000 (0:00:00.633) 0:09:10.975 *********
2026-04-07 03:12:39.034472 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:39.034481 | orchestrator |
2026-04-07 03:12:39.034489 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-07 03:12:39.034497 | orchestrator |
2026-04-07 03:12:39.034505 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-07 03:12:39.034513 | orchestrator | Tuesday 07 April 2026 03:12:38 +0000 (0:00:00.963) 0:09:11.939 *********
2026-04-07 03:12:39.034521 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:39.034530 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:39.034538 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:39.034546 | orchestrator |
2026-04-07 03:12:39.034553 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:12:39.034562 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:12:39.034573 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-07 03:12:39.034581 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-07 03:12:39.034589 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-07 03:12:39.034597 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-07 03:12:39.034605 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-07 03:12:39.034613 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-07 03:12:39.034621 | orchestrator |
2026-04-07 03:12:39.034704 | orchestrator |
2026-04-07 03:12:39.034719 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:12:39.034732 | orchestrator | Tuesday 07 April 2026 03:12:38 +0000 (0:00:00.467) 0:09:12.406 *********
2026-04-07 03:12:39.034745 | orchestrator | ===============================================================================
2026-04-07 03:12:39.034758 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 46.26s
2026-04-07 03:12:39.034771 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.42s
2026-04-07 03:12:39.034785 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.83s
2026-04-07 03:12:39.034830 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.68s
2026-04-07 03:12:39.034845 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.09s
2026-04-07 03:12:39.034858 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.00s
2026-04-07 03:12:39.034869 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.34s
2026-04-07 03:12:39.034899 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.82s
2026-04-07 03:12:39.034915 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.19s
2026-04-07 03:12:39.034929 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.81s
2026-04-07 03:12:39.034942 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.97s
2026-04-07 03:12:39.034956 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.13s
2026-04-07 03:12:39.034967 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.78s
2026-04-07 03:12:39.034976 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.67s
2026-04-07 03:12:39.034986 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.46s
2026-04-07 03:12:39.034995 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.02s
2026-04-07 03:12:39.035004 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.00s
2026-04-07 03:12:39.035017 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.46s
2026-04-07 03:12:39.035031 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.07s
2026-04-07 03:12:39.035044 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.89s
2026-04-07 03:12:41.758699 | orchestrator | 2026-04-07 03:12:41 | INFO  | Task b06025f9-dddb-4fe8-8c4b-b668ebf6b738 (horizon) was prepared for execution.
2026-04-07 03:12:41.758790 | orchestrator | 2026-04-07 03:12:41 | INFO  | It takes a moment until task b06025f9-dddb-4fe8-8c4b-b668ebf6b738 (horizon) has been started and output is visible here.
2026-04-07 03:12:49.715106 | orchestrator |
2026-04-07 03:12:49.715207 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:12:49.715221 | orchestrator |
2026-04-07 03:12:49.715231 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:12:49.715241 | orchestrator | Tuesday 07 April 2026 03:12:46 +0000 (0:00:00.298) 0:00:00.298 *********
2026-04-07 03:12:49.715250 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:49.715260 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:49.715269 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:49.715278 | orchestrator |
2026-04-07 03:12:49.715287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:12:49.715296 | orchestrator | Tuesday 07 April 2026 03:12:46 +0000 (0:00:00.367) 0:00:00.666 *********
2026-04-07 03:12:49.715326 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-07 03:12:49.715342 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-07 03:12:49.715356 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-07 03:12:49.715369 | orchestrator |
2026-04-07 03:12:49.715384 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-07 03:12:49.715399 | orchestrator |
2026-04-07 03:12:49.715413 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-07 03:12:49.715427 | orchestrator | Tuesday 07 April 2026 03:12:47 +0000 (0:00:00.493) 0:00:01.159 *********
2026-04-07 03:12:49.715442 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:12:49.715457 | orchestrator |
2026-04-07 03:12:49.715472 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-07 03:12:49.715487 | orchestrator | Tuesday 07 April 2026 03:12:47 +0000 (0:00:00.571) 0:00:01.731 *********
2026-04-07 03:12:49.715554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 03:12:49.715654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 03:12:49.715698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-07 03:12:49.715716 | orchestrator |
2026-04-07 03:12:49.715731 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-07 03:12:49.715746 | orchestrator | Tuesday 07 April 2026 03:12:49 +0000 (0:00:01.258) 0:00:02.989 *********
2026-04-07 03:12:49.715761 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:49.715775 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:49.715789 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:49.715803 | orchestrator |
2026-04-07 03:12:49.715819 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-07 03:12:49.715833 | orchestrator | Tuesday 07 April 2026 03:12:49 +0000 (0:00:00.502) 0:00:03.491 *********
2026-04-07 03:12:49.715856 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-07 03:12:56.315539 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-07 03:12:56.315753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-07 03:12:56.315771 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-07 03:12:56.315783 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-07 03:12:56.315794 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-07 03:12:56.315805 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-07 03:12:56.315816 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-07 03:12:56.315851 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-07 03:12:56.315862 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-07 03:12:56.315873 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-07 03:12:56.315884 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-07 03:12:56.315895 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-07 03:12:56.315906 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-07 03:12:56.315917 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-07 03:12:56.315928 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-07 03:12:56.315938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-07 03:12:56.315949 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-07 03:12:56.315960 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-07 03:12:56.315970 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-07 03:12:56.315981 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-07 03:12:56.315992 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-07 03:12:56.316002 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-07 03:12:56.316013 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-07 03:12:56.316026 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-07 03:12:56.316039 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-07 03:12:56.316050 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-07 03:12:56.316063 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-07 03:12:56.316091 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-07 03:12:56.316104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-07 03:12:56.316117 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-07 03:12:56.316129 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-07 03:12:56.316143 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-07 03:12:56.316157 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-07 03:12:56.316167 | orchestrator |
2026-04-07 03:12:56.316179 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.316191 | orchestrator | Tuesday 07 April 2026 03:12:50 +0000 (0:00:00.814) 0:00:04.306 *********
2026-04-07 03:12:56.316201 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.316227 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.316246 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.316264 | orchestrator |
2026-04-07 03:12:56.316282 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.316300 | orchestrator | Tuesday 07 April 2026 03:12:50 +0000 (0:00:00.352) 0:00:04.659 *********
2026-04-07 03:12:56.316318 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316336 | orchestrator |
2026-04-07 03:12:56.316380 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.316402 | orchestrator | Tuesday 07 April 2026 03:12:51 +0000 (0:00:00.375) 0:00:05.034 *********
2026-04-07 03:12:56.316420 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316437 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.316449 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.316466 | orchestrator |
2026-04-07 03:12:56.316485 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.316504 | orchestrator | Tuesday 07 April 2026 03:12:51 +0000 (0:00:00.336) 0:00:05.371 *********
2026-04-07 03:12:56.316523 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.316543 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.316562 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.316609 | orchestrator |
2026-04-07 03:12:56.316625 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.316636 | orchestrator | Tuesday 07 April 2026 03:12:51 +0000 (0:00:00.355) 0:00:05.726 *********
2026-04-07 03:12:56.316647 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316658 | orchestrator |
2026-04-07 03:12:56.316669 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.316679 | orchestrator | Tuesday 07 April 2026 03:12:51 +0000 (0:00:00.138) 0:00:05.865 *********
2026-04-07 03:12:56.316690 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316702 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.316713 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.316723 | orchestrator |
2026-04-07 03:12:56.316734 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.316745 | orchestrator | Tuesday 07 April 2026 03:12:52 +0000 (0:00:00.316) 0:00:06.182 *********
2026-04-07 03:12:56.316755 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.316766 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.316782 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.316800 | orchestrator |
2026-04-07 03:12:56.316817 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.316835 | orchestrator | Tuesday 07 April 2026 03:12:52 +0000 (0:00:00.593) 0:00:06.775 *********
2026-04-07 03:12:56.316853 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316869 | orchestrator |
2026-04-07 03:12:56.316888 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.316905 | orchestrator | Tuesday 07 April 2026 03:12:53 +0000 (0:00:00.168) 0:00:06.944 *********
2026-04-07 03:12:56.316922 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.316939 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.316957 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.316976 | orchestrator |
2026-04-07 03:12:56.316992 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.317007 | orchestrator | Tuesday 07 April 2026 03:12:53 +0000 (0:00:00.350) 0:00:07.295 *********
2026-04-07 03:12:56.317024 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.317041 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.317057 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.317075 | orchestrator |
2026-04-07 03:12:56.317092 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.317107 | orchestrator | Tuesday 07 April 2026 03:12:53 +0000 (0:00:00.329) 0:00:07.625 *********
2026-04-07 03:12:56.317125 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317144 | orchestrator |
2026-04-07 03:12:56.317178 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.317196 | orchestrator | Tuesday 07 April 2026 03:12:53 +0000 (0:00:00.138) 0:00:07.763 *********
2026-04-07 03:12:56.317213 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317231 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.317249 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.317265 | orchestrator |
2026-04-07 03:12:56.317282 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.317298 | orchestrator | Tuesday 07 April 2026 03:12:54 +0000 (0:00:00.567) 0:00:08.330 *********
2026-04-07 03:12:56.317315 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.317334 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.317363 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.317381 | orchestrator |
2026-04-07 03:12:56.317399 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.317418 | orchestrator | Tuesday 07 April 2026 03:12:54 +0000 (0:00:00.340) 0:00:08.670 *********
2026-04-07 03:12:56.317437 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317456 | orchestrator |
2026-04-07 03:12:56.317474 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.317492 | orchestrator | Tuesday 07 April 2026 03:12:54 +0000 (0:00:00.144) 0:00:08.815 *********
2026-04-07 03:12:56.317504 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317515 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.317525 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.317536 | orchestrator |
2026-04-07 03:12:56.317547 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.317558 | orchestrator | Tuesday 07 April 2026 03:12:55 +0000 (0:00:00.344) 0:00:09.159 *********
2026-04-07 03:12:56.317618 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:12:56.317638 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:12:56.317656 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:12:56.317673 | orchestrator |
2026-04-07 03:12:56.317690 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:12:56.317706 | orchestrator | Tuesday 07 April 2026 03:12:55 +0000 (0:00:00.330) 0:00:09.490 *********
2026-04-07 03:12:56.317723 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317739 | orchestrator |
2026-04-07 03:12:56.317755 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:12:56.317770 | orchestrator | Tuesday 07 April 2026 03:12:55 +0000 (0:00:00.394) 0:00:09.885 *********
2026-04-07 03:12:56.317787 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:12:56.317803 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:12:56.317821 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:12:56.317838 | orchestrator |
2026-04-07 03:12:56.317856 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:12:56.317891 | orchestrator | Tuesday 07 April 2026 03:12:56 +0000 (0:00:00.324) 0:00:10.210 *********
2026-04-07 03:13:11.418345 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:13:11.418502 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:13:11.418598 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:13:11.418619 | orchestrator |
2026-04-07 03:13:11.418637 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:13:11.418665 | orchestrator | Tuesday 07 April 2026 03:12:56 +0000 (0:00:00.325) 0:00:10.536 *********
2026-04-07 03:13:11.418681 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.418698 | orchestrator |
2026-04-07 03:13:11.418708 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:13:11.418717 | orchestrator | Tuesday 07 April 2026 03:12:56 +0000 (0:00:00.166) 0:00:10.702 *********
2026-04-07 03:13:11.418726 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.418735 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.418743 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.418752 | orchestrator |
2026-04-07 03:13:11.418761 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:13:11.418793 | orchestrator | Tuesday 07 April 2026 03:12:57 +0000 (0:00:00.334) 0:00:11.036 *********
2026-04-07 03:13:11.418803 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:13:11.418811 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:13:11.418820 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:13:11.418828 | orchestrator |
2026-04-07 03:13:11.418837 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:13:11.418846 | orchestrator | Tuesday 07 April 2026 03:12:57 +0000 (0:00:00.569) 0:00:11.606 *********
2026-04-07 03:13:11.418855 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.418863 | orchestrator |
2026-04-07 03:13:11.418872 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:13:11.418880 | orchestrator | Tuesday 07 April 2026 03:12:57 +0000 (0:00:00.138) 0:00:11.744 *********
2026-04-07 03:13:11.418890 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.418901 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.418911 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.418921 | orchestrator |
2026-04-07 03:13:11.418931 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:13:11.418940 | orchestrator | Tuesday 07 April 2026 03:12:58 +0000 (0:00:00.343) 0:00:12.088 *********
2026-04-07 03:13:11.418950 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:13:11.418961 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:13:11.418970 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:13:11.418980 | orchestrator |
2026-04-07 03:13:11.418990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:13:11.419000 | orchestrator | Tuesday 07 April 2026 03:12:58 +0000 (0:00:00.363) 0:00:12.452 *********
2026-04-07 03:13:11.419010 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419020 | orchestrator |
2026-04-07 03:13:11.419030 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:13:11.419040 | orchestrator | Tuesday 07 April 2026 03:12:58 +0000 (0:00:00.165) 0:00:12.617 *********
2026-04-07 03:13:11.419049 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419059 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.419069 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.419078 | orchestrator |
2026-04-07 03:13:11.419089 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-07 03:13:11.419099 | orchestrator | Tuesday 07 April 2026 03:12:59 +0000 (0:00:00.561) 0:00:13.178 *********
2026-04-07 03:13:11.419109 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:13:11.419120 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:13:11.419130 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:13:11.419140 | orchestrator |
2026-04-07 03:13:11.419151 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-07 03:13:11.419161 | orchestrator | Tuesday 07 April 2026 03:12:59 +0000 (0:00:00.342) 0:00:13.521 *********
2026-04-07 03:13:11.419171 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419181 | orchestrator |
2026-04-07 03:13:11.419191 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-07 03:13:11.419200 | orchestrator | Tuesday 07 April 2026 03:12:59 +0000 (0:00:00.140) 0:00:13.662 *********
2026-04-07 03:13:11.419222 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419231 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.419240 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.419250 | orchestrator |
2026-04-07 03:13:11.419265 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-07 03:13:11.419280 | orchestrator | Tuesday 07 April 2026 03:13:00 +0000 (0:00:00.320) 0:00:13.982 *********
2026-04-07 03:13:11.419295 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:13:11.419310 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:13:11.419324 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:13:11.419336 | orchestrator |
2026-04-07 03:13:11.419350 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-07 03:13:11.419379 | orchestrator | Tuesday 07 April 2026 03:13:01 +0000 (0:00:01.892) 0:00:15.874 *********
2026-04-07 03:13:11.419395 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-07 03:13:11.419411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-07 03:13:11.419425 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-07 03:13:11.419436 | orchestrator |
2026-04-07 03:13:11.419458 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-07 03:13:11.419467 | orchestrator | Tuesday 07 April 2026 03:13:03 +0000 (0:00:01.903) 0:00:17.778 *********
2026-04-07 03:13:11.419476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-07 03:13:11.419486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-07 03:13:11.419495 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-07 03:13:11.419503 | orchestrator |
2026-04-07 03:13:11.419512 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-07 03:13:11.419564 | orchestrator | Tuesday 07 April 2026 03:13:05 +0000 (0:00:01.859) 0:00:19.637 *********
2026-04-07 03:13:11.419580 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-07 03:13:11.419595 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-07 03:13:11.419610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-07 03:13:11.419625 | orchestrator |
2026-04-07 03:13:11.419639 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-07 03:13:11.419655 | orchestrator | Tuesday 07 April 2026 03:13:07 +0000 (0:00:01.631) 0:00:21.269 *********
2026-04-07 03:13:11.419664 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419673 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.419681 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.419690 | orchestrator |
2026-04-07 03:13:11.419698 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-07 03:13:11.419707 | orchestrator | Tuesday 07 April 2026 03:13:07 +0000 (0:00:00.536) 0:00:21.805 *********
2026-04-07 03:13:11.419715 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:13:11.419724 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:13:11.419732 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:13:11.419741 | orchestrator |
2026-04-07 03:13:11.419749 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-07 03:13:11.419758 | orchestrator | Tuesday 07 April 2026 03:13:08 +0000 (0:00:00.334) 0:00:22.140 *********
2026-04-07 03:13:11.419767 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:13:11.419776 | orchestrator |
2026-04-07 03:13:11.419785 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-07
03:13:11.419793 | orchestrator | Tuesday 07 April 2026 03:13:09 +0000 (0:00:00.923) 0:00:23.064 ********* 2026-04-07 03:13:11.419816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:13:11.419850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:13:12.125363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:13:12.125617 | orchestrator | 2026-04-07 03:13:12.125656 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-07 03:13:12.125679 | orchestrator | Tuesday 07 April 2026 03:13:11 +0000 (0:00:02.245) 0:00:25.309 ********* 2026-04-07 03:13:12.125722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:12.125748 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:13:12.125772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:12.125785 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:13:12.125806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:14.889376 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:13:14.889454 | orchestrator | 2026-04-07 03:13:14.889464 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-07 03:13:14.889473 | orchestrator | Tuesday 07 April 2026 03:13:12 +0000 (0:00:00.712) 0:00:26.021 ********* 2026-04-07 03:13:14.889496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:14.889507 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:13:14.889581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:14.889605 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:13:14.889635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 03:13:14.889643 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:13:14.889650 | orchestrator | 2026-04-07 03:13:14.889656 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-07 03:13:14.889663 | orchestrator | Tuesday 07 April 2026 03:13:13 +0000 (0:00:00.903) 0:00:26.924 ********* 2026-04-07 03:13:14.889679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:14:00.152970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:14:00.153169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 03:14:00.153194 | 
orchestrator |
2026-04-07 03:14:00.153208 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-07 03:14:00.153222 | orchestrator | Tuesday 07 April 2026 03:13:14 +0000 (0:00:01.858) 0:00:28.783 *********
2026-04-07 03:14:00.153233 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:14:00.153245 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:14:00.153256 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:14:00.153267 | orchestrator |
2026-04-07 03:14:00.153279 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-07 03:14:00.153290 | orchestrator | Tuesday 07 April 2026 03:13:15 +0000 (0:00:00.370) 0:00:29.154 *********
2026-04-07 03:14:00.153302 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:14:00.153314 | orchestrator |
2026-04-07 03:14:00.153325 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-07 03:14:00.153337 | orchestrator | Tuesday 07 April 2026 03:13:15 +0000 (0:00:00.580) 0:00:29.734 *********
2026-04-07 03:14:00.153348 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:14:00.153358 | orchestrator |
2026-04-07 03:14:00.153367 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-07 03:14:00.153433 | orchestrator | Tuesday 07 April 2026 03:13:18 +0000 (0:00:02.318) 0:00:32.052 *********
2026-04-07 03:14:00.153445 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:14:00.153457 | orchestrator |
2026-04-07 03:14:00.153467 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-07 03:14:00.153478 | orchestrator | Tuesday 07 April 2026 03:13:20 +0000 (0:00:02.832) 0:00:34.885 *********
2026-04-07 03:14:00.153490 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:14:00.153501 | orchestrator |
2026-04-07 03:14:00.153523 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-07 03:14:00.153535 | orchestrator | Tuesday 07 April 2026 03:13:38 +0000 (0:00:17.317) 0:00:52.202 *********
2026-04-07 03:14:00.153546 | orchestrator |
2026-04-07 03:14:00.153557 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-07 03:14:00.153568 | orchestrator | Tuesday 07 April 2026 03:13:38 +0000 (0:00:00.072) 0:00:52.275 *********
2026-04-07 03:14:00.153580 | orchestrator |
2026-04-07 03:14:00.153590 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-07 03:14:00.153600 | orchestrator | Tuesday 07 April 2026 03:13:38 +0000 (0:00:00.070) 0:00:52.345 *********
2026-04-07 03:14:00.153610 | orchestrator |
2026-04-07 03:14:00.153620 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-07 03:14:00.153630 | orchestrator | Tuesday 07 April 2026 03:13:38 +0000 (0:00:00.080) 0:00:52.425 *********
2026-04-07 03:14:00.153640 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:14:00.153651 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:14:00.153662 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:14:00.153673 | orchestrator |
2026-04-07 03:14:00.153684 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:14:00.153696 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-07 03:14:00.153708 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-07 03:14:00.153718 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-07 03:14:00.153729 | orchestrator |
2026-04-07 03:14:00.153739 | orchestrator |
2026-04-07 03:14:00.153748 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:14:00.153758 | orchestrator | Tuesday 07 April 2026 03:14:00 +0000 (0:00:21.611) 0:01:14.037 *********
2026-04-07 03:14:00.153767 | orchestrator | ===============================================================================
2026-04-07 03:14:00.153777 | orchestrator | horizon : Restart horizon container ------------------------------------ 21.61s
2026-04-07 03:14:00.153787 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.32s
2026-04-07 03:14:00.153796 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.83s
2026-04-07 03:14:00.153806 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.32s
2026-04-07 03:14:00.153816 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.24s
2026-04-07 03:14:00.153835 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.90s
2026-04-07 03:14:00.153844 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.89s
2026-04-07 03:14:00.153853 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.86s
2026-04-07 03:14:00.153862 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.86s
2026-04-07 03:14:00.153873 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.63s
2026-04-07 03:14:00.153884 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.26s
2026-04-07 03:14:00.153893 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s
2026-04-07 03:14:00.153903 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s
2026-04-07 03:14:00.153923 |
orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-04-07 03:14:00.634275 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-04-07 03:14:00.634355 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-04-07 03:14:00.634365 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-04-07 03:14:00.634427 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-04-07 03:14:00.634435 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-04-07 03:14:00.634441 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-04-07 03:14:03.213846 | orchestrator | 2026-04-07 03:14:03 | INFO  | Task daa9f246-36c1-47f0-b0a4-943b475d5406 (skyline) was prepared for execution. 2026-04-07 03:14:03.213926 | orchestrator | 2026-04-07 03:14:03 | INFO  | It takes a moment until task daa9f246-36c1-47f0-b0a4-943b475d5406 (skyline) has been started and output is visible here. 
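The horizon bootstrap above (Creating Horizon database, Creating Horizon database user and setting permissions) follows the usual idempotent MySQL provisioning pattern. A minimal sketch of the statements such a bootstrap step issues; the database name, user, and password here are illustrative placeholders, not values from this job:

```python
# Sketch of the SQL an idempotent "create database / create user / grant"
# bootstrap step issues. Names and password are illustrative placeholders,
# not values taken from this deployment.
def bootstrap_statements(database: str, user: str, password: str, host: str = "%") -> list[str]:
    """Return idempotent provisioning statements for one service database."""
    return [
        f"CREATE DATABASE IF NOT EXISTS `{database}`",
        f"CREATE USER IF NOT EXISTS '{user}'@'{host}' IDENTIFIED BY '{password}'",
        f"GRANT ALL PRIVILEGES ON `{database}`.* TO '{user}'@'{host}'",
    ]

if __name__ == "__main__":
    for stmt in bootstrap_statements("horizon", "horizon", "secret"):
        print(stmt)
```

Because every statement is guarded by `IF NOT EXISTS` (and `GRANT` is naturally repeatable), re-running the play against an already-provisioned database is a no-op, which is why only the first run reports `changed`.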
2026-04-07 03:14:35.662905 | orchestrator |
2026-04-07 03:14:35.663044 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:14:35.663064 | orchestrator |
2026-04-07 03:14:35.663076 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:14:35.663088 | orchestrator | Tuesday 07 April 2026 03:14:07 +0000 (0:00:00.290) 0:00:00.290 *********
2026-04-07 03:14:35.663099 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:14:35.663112 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:14:35.663124 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:14:35.663135 | orchestrator |
2026-04-07 03:14:35.663146 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:14:35.663157 | orchestrator | Tuesday 07 April 2026 03:14:08 +0000 (0:00:00.331) 0:00:00.622 *********
2026-04-07 03:14:35.663169 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-04-07 03:14:35.663180 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-04-07 03:14:35.663191 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-04-07 03:14:35.663202 | orchestrator |
2026-04-07 03:14:35.663213 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-04-07 03:14:35.663224 | orchestrator |
2026-04-07 03:14:35.663234 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-04-07 03:14:35.663245 | orchestrator | Tuesday 07 April 2026 03:14:08 +0000 (0:00:00.496) 0:00:01.119 *********
2026-04-07 03:14:35.663257 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:14:35.663269 | orchestrator |
2026-04-07 03:14:35.663320 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-04-07 03:14:35.663337 | orchestrator | Tuesday 07 April 2026 03:14:09 +0000 (0:00:00.604) 0:00:01.724 *********
2026-04-07 03:14:35.663348 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-04-07 03:14:35.663359 | orchestrator |
2026-04-07 03:14:35.663370 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-04-07 03:14:35.663381 | orchestrator | Tuesday 07 April 2026 03:14:12 +0000 (0:00:03.548) 0:00:05.273 *********
2026-04-07 03:14:35.663393 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-04-07 03:14:35.663404 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-04-07 03:14:35.663418 | orchestrator |
2026-04-07 03:14:35.663432 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-04-07 03:14:35.663444 | orchestrator | Tuesday 07 April 2026 03:14:19 +0000 (0:00:06.893) 0:00:12.166 *********
2026-04-07 03:14:35.663457 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 03:14:35.663471 | orchestrator |
2026-04-07 03:14:35.663484 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-04-07 03:14:35.663497 | orchestrator | Tuesday 07 April 2026 03:14:22 +0000 (0:00:03.390) 0:00:15.557 *********
2026-04-07 03:14:35.663509 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:14:35.663520 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-04-07 03:14:35.663531 | orchestrator |
2026-04-07 03:14:35.663542 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-04-07 03:14:35.663585 | orchestrator | Tuesday 07 April 2026 03:14:27 +0000 (0:00:04.086) 0:00:19.643 *********
2026-04-07 03:14:35.663597 | orchestrator | ok: [testbed-node-0] => (item=admin)
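In the service-ks-register tasks above, an item reports `changed` when the Keystone resource had to be created (the skyline service, its endpoints, the user) and `ok` when it already existed (the `service` project, the `admin` role). A toy sketch of that create-if-missing semantics; the dict here is a stand-in registry, not the real Keystone API:

```python
# Toy model of the idempotent registration pattern behind the
# changed/ok results above: "changed" means the resource was created
# on this run, "ok" means it already existed. The dict is a stand-in
# for Keystone, not a real API client.
def ensure(registry: dict, kind: str, name: str, **attrs) -> str:
    """Create (kind, name) in the registry if missing; report the result."""
    key = (kind, name)
    if key in registry:
        return "ok"
    registry[key] = attrs
    return "changed"

if __name__ == "__main__":
    keystone = {}
    print(ensure(keystone, "service", "skyline", type="panel"))  # created -> changed
    print(ensure(keystone, "project", "service"))                # created -> changed
    print(ensure(keystone, "project", "service"))                # already there -> ok
```

The same pattern is what makes re-running the deploy safe: a second pass over identical items turns every `changed` into `ok` without touching existing resources.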
2026-04-07 03:14:35.663608 | orchestrator | 2026-04-07 03:14:35.663619 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-07 03:14:35.663630 | orchestrator | Tuesday 07 April 2026 03:14:30 +0000 (0:00:03.306) 0:00:22.950 ********* 2026-04-07 03:14:35.663641 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-07 03:14:35.663652 | orchestrator | 2026-04-07 03:14:35.663677 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-07 03:14:35.663689 | orchestrator | Tuesday 07 April 2026 03:14:34 +0000 (0:00:03.901) 0:00:26.851 ********* 2026-04-07 03:14:35.663704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:35.663740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:35.663752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:35.663765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:35.663793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:35.663815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.726835 | orchestrator | 2026-04-07 03:14:39.726939 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-07 03:14:39.726956 | orchestrator | Tuesday 07 April 2026 03:14:35 +0000 (0:00:01.361) 0:00:28.212 ********* 2026-04-07 03:14:39.726969 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:14:39.726981 | orchestrator | 2026-04-07 03:14:39.726992 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-07 03:14:39.727003 | orchestrator | Tuesday 07 April 2026 03:14:36 +0000 (0:00:00.838) 0:00:29.050 ********* 2026-04-07 03:14:39.727017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:39.727193 | orchestrator | 2026-04-07 03:14:39.727204 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-07 03:14:39.727216 | orchestrator | Tuesday 07 April 2026 03:14:39 +0000 (0:00:02.551) 0:00:31.601 ********* 2026-04-07 03:14:39.727234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:39.727246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:39.727282 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:14:39.727305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106478 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:14:41.106496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106506 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:14:41.106510 | orchestrator | 2026-04-07 03:14:41.106515 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-07 03:14:41.106521 | orchestrator | Tuesday 07 April 2026 03:14:39 +0000 (0:00:00.682) 0:00:32.283 ********* 2026-04-07 03:14:41.106525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106547 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:14:41.106554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106562 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:14:41.106566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-07 03:14:41.106577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-07 03:14:49.753641 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:14:49.753716 | orchestrator | 2026-04-07 03:14:49.753725 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-07 03:14:49.753733 | orchestrator | Tuesday 07 April 2026 03:14:41 +0000 (0:00:01.368) 0:00:33.652 ********* 2026-04-07 03:14:49.753752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753822 | orchestrator | 2026-04-07 03:14:49.753827 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-07 03:14:49.753833 | orchestrator | Tuesday 07 April 2026 03:14:43 +0000 (0:00:02.429) 0:00:36.082 ********* 2026-04-07 03:14:49.753838 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-07 03:14:49.753844 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-07 03:14:49.753849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-07 03:14:49.753854 | orchestrator | 2026-04-07 03:14:49.753859 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-07 03:14:49.753865 | orchestrator | Tuesday 07 April 2026 03:14:45 +0000 (0:00:01.703) 0:00:37.785 ********* 2026-04-07 03:14:49.753870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-07 03:14:49.753875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-07 03:14:49.753885 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-07 03:14:49.753890 | orchestrator | 2026-04-07 03:14:49.753895 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-07 03:14:49.753901 | orchestrator | Tuesday 07 April 2026 03:14:47 +0000 (0:00:02.181) 0:00:39.967 ********* 2026-04-07 03:14:49.753906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:49.753917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.000989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001156 | orchestrator | 2026-04-07 03:14:52.001168 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-07 03:14:52.001180 | orchestrator | Tuesday 07 April 2026 03:14:49 +0000 (0:00:02.342) 0:00:42.310 ********* 2026-04-07 03:14:52.001190 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:14:52.001201 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 03:14:52.001211 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:14:52.001221 | orchestrator | 2026-04-07 03:14:52.001330 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-07 03:14:52.001343 | orchestrator | Tuesday 07 April 2026 03:14:50 +0000 (0:00:00.374) 0:00:42.684 ********* 2026-04-07 03:14:52.001361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:14:52.001429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:15:27.207083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-07 03:15:27.207250 | orchestrator | 2026-04-07 03:15:27.207264 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-04-07 03:15:27.207270 | orchestrator | Tuesday 07 April 2026 03:14:51 +0000 (0:00:01.869) 0:00:44.553 ********* 2026-04-07 03:15:27.207275 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:15:27.207281 | orchestrator | 2026-04-07 03:15:27.207286 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-04-07 03:15:27.207290 | orchestrator | Tuesday 07 April 2026 03:14:54 +0000 (0:00:02.318) 0:00:46.872 ********* 2026-04-07 03:15:27.207295 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:15:27.207299 | orchestrator | 2026-04-07 03:15:27.207304 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-04-07 03:15:27.207308 | orchestrator | Tuesday 07 April 2026 03:14:56 +0000 (0:00:02.205) 0:00:49.078 ********* 2026-04-07 03:15:27.207313 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:15:27.207317 | orchestrator | 2026-04-07 03:15:27.207322 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-07 03:15:27.207327 | orchestrator | Tuesday 07 April 2026 03:15:05 +0000 (0:00:08.668) 0:00:57.747 ********* 2026-04-07 03:15:27.207332 | orchestrator | 2026-04-07 03:15:27.207336 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-07 03:15:27.207341 | orchestrator | Tuesday 07 April 2026 03:15:05 +0000 (0:00:00.081) 0:00:57.828 ********* 2026-04-07 03:15:27.207345 | orchestrator | 2026-04-07 03:15:27.207350 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-04-07 03:15:27.207355 | orchestrator | Tuesday 07 April 2026 03:15:05 +0000 (0:00:00.071) 0:00:57.900 ********* 2026-04-07 03:15:27.207359 | orchestrator | 2026-04-07 03:15:27.207364 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-07 03:15:27.207368 | orchestrator | Tuesday 07 April 2026 03:15:05 +0000 (0:00:00.073) 0:00:57.974 ********* 2026-04-07 03:15:27.207372 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:15:27.207377 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:15:27.207382 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:15:27.207386 | orchestrator | 2026-04-07 03:15:27.207391 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-04-07 03:15:27.207395 | orchestrator | Tuesday 07 April 2026 03:15:11 +0000 (0:00:06.542) 0:01:04.517 ********* 2026-04-07 03:15:27.207401 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:15:27.207408 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:15:27.207415 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:15:27.207421 | orchestrator | 2026-04-07 03:15:27.207432 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:15:27.207442 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 03:15:27.207454 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 03:15:27.207460 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 03:15:27.207466 | orchestrator | 2026-04-07 03:15:27.207473 | orchestrator | 2026-04-07 03:15:27.207479 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:15:27.207486 | orchestrator | Tuesday 07 
April 2026 03:15:26 +0000 (0:00:14.843) 0:01:19.360 ********* 2026-04-07 03:15:27.207492 | orchestrator | =============================================================================== 2026-04-07 03:15:27.207505 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.84s 2026-04-07 03:15:27.207512 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.67s 2026-04-07 03:15:27.207519 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.89s 2026-04-07 03:15:27.207525 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.54s 2026-04-07 03:15:27.207546 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.09s 2026-04-07 03:15:27.207553 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.90s 2026-04-07 03:15:27.207560 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.55s 2026-04-07 03:15:27.207567 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.39s 2026-04-07 03:15:27.207589 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.31s 2026-04-07 03:15:27.207597 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.55s 2026-04-07 03:15:27.207604 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.43s 2026-04-07 03:15:27.207612 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.34s 2026-04-07 03:15:27.207618 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.32s 2026-04-07 03:15:27.207623 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.21s 2026-04-07 03:15:27.207627 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.18s 2026-04-07 03:15:27.207632 | orchestrator | skyline : Check skyline container --------------------------------------- 1.87s 2026-04-07 03:15:27.207636 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.70s 2026-04-07 03:15:27.207641 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.37s 2026-04-07 03:15:27.207645 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.36s 2026-04-07 03:15:27.207650 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.84s 2026-04-07 03:15:29.790449 | orchestrator | 2026-04-07 03:15:29 | INFO  | Task 0661f7bb-0ee3-48ae-a09b-682c9eeb5d54 (glance) was prepared for execution. 2026-04-07 03:15:29.790541 | orchestrator | 2026-04-07 03:15:29 | INFO  | It takes a moment until task 0661f7bb-0ee3-48ae-a09b-682c9eeb5d54 (glance) has been started and output is visible here. 
2026-04-07 03:16:05.706524 | orchestrator |
2026-04-07 03:16:05.706621 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:16:05.706633 | orchestrator |
2026-04-07 03:16:05.706642 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:16:05.706650 | orchestrator | Tuesday 07 April 2026 03:15:34 +0000 (0:00:00.292) 0:00:00.293 *********
2026-04-07 03:16:05.706658 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:16:05.706667 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:16:05.706674 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:16:05.706682 | orchestrator |
2026-04-07 03:16:05.706689 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:16:05.706697 | orchestrator | Tuesday 07 April 2026 03:15:34 +0000 (0:00:00.334) 0:00:00.627 *********
2026-04-07 03:16:05.706704 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-07 03:16:05.706712 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-07 03:16:05.706719 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-07 03:16:05.706727 | orchestrator |
2026-04-07 03:16:05.706734 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-07 03:16:05.706742 | orchestrator |
2026-04-07 03:16:05.706749 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-07 03:16:05.706757 | orchestrator | Tuesday 07 April 2026 03:15:35 +0000 (0:00:00.523) 0:00:01.151 *********
2026-04-07 03:16:05.706782 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:16:05.706791 | orchestrator |
2026-04-07 03:16:05.706798 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-07 03:16:05.706806 | orchestrator | Tuesday 07 April 2026 03:15:35 +0000 (0:00:00.602) 0:00:01.754 *********
2026-04-07 03:16:05.706813 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-07 03:16:05.706821 | orchestrator |
2026-04-07 03:16:05.706828 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-07 03:16:05.706835 | orchestrator | Tuesday 07 April 2026 03:15:39 +0000 (0:00:03.673) 0:00:05.428 *********
2026-04-07 03:16:05.706843 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-07 03:16:05.706850 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-07 03:16:05.706857 | orchestrator |
2026-04-07 03:16:05.706865 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-07 03:16:05.706872 | orchestrator | Tuesday 07 April 2026 03:15:46 +0000 (0:00:06.803) 0:00:12.231 *********
2026-04-07 03:16:05.706880 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 03:16:05.706888 | orchestrator |
2026-04-07 03:16:05.706896 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-07 03:16:05.706903 | orchestrator | Tuesday 07 April 2026 03:15:49 +0000 (0:00:03.519) 0:00:15.750 *********
2026-04-07 03:16:05.706911 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:16:05.706918 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-07 03:16:05.706925 | orchestrator |
2026-04-07 03:16:05.706933 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-07 03:16:05.706940 | orchestrator | Tuesday 07 April 2026 03:15:53 +0000 (0:00:04.246) 0:00:19.996 *********
2026-04-07 03:16:05.706947 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 03:16:05.706955 | orchestrator |
2026-04-07 03:16:05.706962 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-07 03:16:05.706969 | orchestrator | Tuesday 07 April 2026 03:15:57 +0000 (0:00:03.467) 0:00:23.463 *********
2026-04-07 03:16:05.706989 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-07 03:16:05.706997 | orchestrator |
2026-04-07 03:16:05.707004 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-07 03:16:05.707011 | orchestrator | Tuesday 07 April 2026 03:16:01 +0000 (0:00:03.891) 0:00:27.354 *********
2026-04-07 03:16:05.707075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:05.707097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:05.707110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:05.707118 | orchestrator |
2026-04-07 03:16:05.707126 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-07 03:16:05.707133 | orchestrator | Tuesday 07 April 2026 03:16:04 +0000 (0:00:03.554) 0:00:30.909 *********
2026-04-07 03:16:05.707142 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:16:05.707154 | orchestrator |
2026-04-07 03:16:05.707170 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-07 03:16:21.901829 | orchestrator | Tuesday 07 April 2026 03:16:05 +0000 (0:00:00.790) 0:00:31.700 *********
2026-04-07 03:16:21.901949 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:16:21.901966 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:16:21.901978 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:16:21.901990 | orchestrator |
2026-04-07 03:16:21.902091 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-07 03:16:21.902105 | orchestrator | Tuesday 07 April 2026 03:16:09 +0000 (0:00:03.765) 0:00:35.466 *********
2026-04-07 03:16:21.902118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902153 | orchestrator |
2026-04-07 03:16:21.902164 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-07 03:16:21.902175 | orchestrator | Tuesday 07 April 2026 03:16:11 +0000 (0:00:01.620) 0:00:37.086 *********
2026-04-07 03:16:21.902185 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-07 03:16:21.902219 | orchestrator |
2026-04-07 03:16:21.902230 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-07 03:16:21.902241 | orchestrator | Tuesday 07 April 2026 03:16:12 +0000 (0:00:01.487) 0:00:38.573 *********
2026-04-07 03:16:21.902252 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:16:21.902263 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:16:21.902274 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:16:21.902285 | orchestrator |
2026-04-07 03:16:21.902296 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-07 03:16:21.902307 | orchestrator | Tuesday 07 April 2026 03:16:13 +0000 (0:00:00.151) 0:00:39.355 *********
2026-04-07 03:16:21.902318 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:16:21.902329 | orchestrator |
2026-04-07 03:16:21.902341 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-07 03:16:21.902352 | orchestrator | Tuesday 07 April 2026 03:16:13 +0000 (0:00:00.151) 0:00:39.506 *********
2026-04-07 03:16:21.902363 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:16:21.902374 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:16:21.902385 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:16:21.902396 | orchestrator |
2026-04-07 03:16:21.902407 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-07 03:16:21.902418 | orchestrator | Tuesday 07 April 2026 03:16:13 +0000 (0:00:00.300) 0:00:39.806 *********
2026-04-07 03:16:21.902429 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:16:21.902440 | orchestrator |
2026-04-07 03:16:21.902451 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-07 03:16:21.902462 | orchestrator | Tuesday 07 April 2026 03:16:14 +0000 (0:00:00.844) 0:00:40.651 *********
2026-04-07 03:16:21.902495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:21.902569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:21.902590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:21.902610 | orchestrator |
2026-04-07 03:16:21.902621 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-07 03:16:21.902632 | orchestrator | Tuesday 07 April 2026 03:16:18 +0000 (0:00:04.012) 0:00:44.664 *********
2026-04-07 03:16:21.902653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:25.815521 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:16:25.815620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:25.815644 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:16:25.815649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:25.815653 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:16:25.815657 | orchestrator |
2026-04-07 03:16:25.815662 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-07 03:16:25.815667 | orchestrator | Tuesday 07 April 2026 03:16:21 +0000 (0:00:03.235) 0:00:47.899 *********
2026-04-07 03:16:25.815681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:25.815690 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:16:25.815697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:16:25.815701 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:16:25.815709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:17:03.314493 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.314607 | orchestrator |
2026-04-07 03:17:03.314625 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-07 03:17:03.314640 | orchestrator | Tuesday 07 April 2026 03:16:25 +0000 (0:00:03.908) 0:00:51.808 *********
2026-04-07 03:17:03.314651 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.314685 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.314697 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.314708 | orchestrator |
2026-04-07 03:17:03.314719 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-07 03:17:03.314730 | orchestrator | Tuesday 07 April 2026 03:16:29 +0000 (0:00:03.453) 0:00:55.261 *********
2026-04-07 03:17:03.314760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:17:03.314778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:17:03.314816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 03:17:03.314842 | orchestrator |
2026-04-07 03:17:03.314854 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-07 03:17:03.314866 | orchestrator | Tuesday 07 April 2026 03:16:33 +0000 (0:00:04.129) 0:00:59.390 *********
2026-04-07 03:17:03.314877 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:17:03.314888 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:17:03.314899 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:17:03.314939 | orchestrator |
2026-04-07 03:17:03.314951 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-07 03:17:03.314962 | orchestrator | Tuesday 07 April 2026 03:16:39 +0000 (0:00:05.837) 0:01:05.227 *********
2026-04-07 03:17:03.314973 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.314984 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.314995 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315009 | orchestrator |
2026-04-07 03:17:03.315022 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-07 03:17:03.315036 | orchestrator | Tuesday 07 April 2026 03:16:42 +0000 (0:00:03.754) 0:01:08.982 *********
2026-04-07 03:17:03.315049 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.315061 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.315075 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315088 | orchestrator |
2026-04-07 03:17:03.315101 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-07 03:17:03.315114 | orchestrator | Tuesday 07 April 2026 03:16:46 +0000 (0:00:03.686) 0:01:12.668 *********
2026-04-07 03:17:03.315127 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.315140 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.315153 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315166 | orchestrator |
2026-04-07 03:17:03.315178 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-07 03:17:03.315191 | orchestrator | Tuesday 07 April 2026 03:16:50 +0000 (0:00:03.679) 0:01:16.348 *********
2026-04-07 03:17:03.315205 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.315218 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.315230 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315243 | orchestrator |
2026-04-07 03:17:03.315256 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-07 03:17:03.315269 | orchestrator | Tuesday 07 April 2026 03:16:54 +0000 (0:00:03.862) 0:01:20.211 *********
2026-04-07 03:17:03.315283 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.315293 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.315312 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315329 | orchestrator |
2026-04-07 03:17:03.315347 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-07 03:17:03.315365 | orchestrator | Tuesday 07 April 2026 03:16:54 +0000 (0:00:00.621) 0:01:20.832 *********
2026-04-07 03:17:03.315382 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-07 03:17:03.315401 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:17:03.315418 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-07 03:17:03.315435 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:17:03.315450 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-07 03:17:03.315469 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:17:03.315486 | orchestrator |
2026-04-07 03:17:03.315505 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-07 03:17:03.315522 | orchestrator | Tuesday 07 April 2026 03:16:58 +0000 (0:00:03.530) 0:01:24.362 *********
2026-04-07 03:17:03.315541 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:17:03.315559 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:17:03.315577 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:17:03.315588 | orchestrator |
2026-04-07 03:17:03.315599 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-07 03:17:03.315621 | orchestrator | Tuesday 07 April 2026 03:17:03 +0000 (0:00:04.945) 0:01:29.308 *********
2026-04-07 03:18:27.008177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 03:18:27.008509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 03:18:27.008664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 03:18:27.008697 | orchestrator | 2026-04-07 03:18:27.008720 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-07 03:18:27.008778 | orchestrator | Tuesday 07 April 2026 03:17:07 +0000 (0:00:03.937) 0:01:33.245 ********* 2026-04-07 03:18:27.008802 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:18:27.008826 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:18:27.008849 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:18:27.008871 | orchestrator | 2026-04-07 03:18:27.008894 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-07 03:18:27.008917 | orchestrator | Tuesday 07 April 2026 03:17:07 +0000 (0:00:00.545) 0:01:33.791 ********* 2026-04-07 03:18:27.008939 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.008962 | orchestrator | 2026-04-07 03:18:27.008983 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-04-07 03:18:27.009005 | orchestrator | Tuesday 07 April 2026 03:17:10 +0000 (0:00:02.319) 0:01:36.110 ********* 2026-04-07 03:18:27.009026 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.009047 | orchestrator | 2026-04-07 03:18:27.009068 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-07 03:18:27.009088 | orchestrator | Tuesday 07 April 2026 03:17:12 +0000 (0:00:02.385) 0:01:38.495 ********* 2026-04-07 03:18:27.009108 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.009139 | orchestrator | 2026-04-07 03:18:27.009160 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-07 03:18:27.009180 | orchestrator | Tuesday 07 April 2026 03:17:14 +0000 (0:00:02.196) 0:01:40.692 ********* 2026-04-07 03:18:27.009201 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.009220 | orchestrator | 2026-04-07 03:18:27.009239 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-07 03:18:27.009256 | orchestrator | Tuesday 07 April 2026 03:17:45 +0000 (0:00:30.784) 0:02:11.477 ********* 2026-04-07 03:18:27.009274 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.009290 | orchestrator | 2026-04-07 03:18:27.009308 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-07 03:18:27.009326 | orchestrator | Tuesday 07 April 2026 03:17:47 +0000 (0:00:02.190) 0:02:13.667 ********* 2026-04-07 03:18:27.009343 | orchestrator | 2026-04-07 03:18:27.009360 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-07 03:18:27.009377 | orchestrator | Tuesday 07 April 2026 03:17:47 +0000 (0:00:00.073) 0:02:13.741 ********* 2026-04-07 03:18:27.009395 | orchestrator | 2026-04-07 03:18:27.009412 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-04-07 03:18:27.009429 | orchestrator | Tuesday 07 April 2026 03:17:47 +0000 (0:00:00.072) 0:02:13.813 ********* 2026-04-07 03:18:27.009446 | orchestrator | 2026-04-07 03:18:27.009464 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-07 03:18:27.009482 | orchestrator | Tuesday 07 April 2026 03:17:47 +0000 (0:00:00.070) 0:02:13.884 ********* 2026-04-07 03:18:27.009501 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:18:27.009519 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:18:27.009536 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:18:27.009553 | orchestrator | 2026-04-07 03:18:27.009570 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:18:27.009590 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 03:18:27.009611 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 03:18:27.009632 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 03:18:27.009652 | orchestrator | 2026-04-07 03:18:27.009671 | orchestrator | 2026-04-07 03:18:27.009692 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:18:27.009710 | orchestrator | Tuesday 07 April 2026 03:18:26 +0000 (0:00:39.112) 0:02:52.996 ********* 2026-04-07 03:18:27.009726 | orchestrator | =============================================================================== 2026-04-07 03:18:27.009773 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.11s 2026-04-07 03:18:27.009790 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.78s 2026-04-07 03:18:27.009808 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.80s 2026-04-07 03:18:27.009849 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.84s 2026-04-07 03:18:27.403181 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.95s 2026-04-07 03:18:27.403274 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.25s 2026-04-07 03:18:27.403289 | orchestrator | glance : Copying over config.json files for services -------------------- 4.13s 2026-04-07 03:18:27.403301 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.01s 2026-04-07 03:18:27.403313 | orchestrator | glance : Check glance containers ---------------------------------------- 3.94s 2026-04-07 03:18:27.403324 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.91s 2026-04-07 03:18:27.403356 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.89s 2026-04-07 03:18:27.403384 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.86s 2026-04-07 03:18:27.403391 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.77s 2026-04-07 03:18:27.403399 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.75s 2026-04-07 03:18:27.403406 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.69s 2026-04-07 03:18:27.403413 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.68s 2026-04-07 03:18:27.403421 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.67s 2026-04-07 03:18:27.403428 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.55s 2026-04-07 03:18:27.403435 | orchestrator | 
glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.53s 2026-04-07 03:18:27.403442 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.52s 2026-04-07 03:18:30.002446 | orchestrator | 2026-04-07 03:18:30 | INFO  | Task eb60d693-74d3-4067-87ab-8b4e7fdd22b4 (cinder) was prepared for execution. 2026-04-07 03:18:30.002558 | orchestrator | 2026-04-07 03:18:30 | INFO  | It takes a moment until task eb60d693-74d3-4067-87ab-8b4e7fdd22b4 (cinder) has been started and output is visible here. 2026-04-07 03:19:07.693702 | orchestrator | 2026-04-07 03:19:07.693840 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:19:07.693851 | orchestrator | 2026-04-07 03:19:07.693856 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:19:07.693861 | orchestrator | Tuesday 07 April 2026 03:18:34 +0000 (0:00:00.285) 0:00:00.285 ********* 2026-04-07 03:19:07.693865 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:19:07.693871 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:19:07.693875 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:19:07.693879 | orchestrator | 2026-04-07 03:19:07.693883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:19:07.693887 | orchestrator | Tuesday 07 April 2026 03:18:34 +0000 (0:00:00.346) 0:00:00.631 ********* 2026-04-07 03:19:07.693891 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-07 03:19:07.693896 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-07 03:19:07.693900 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-07 03:19:07.693904 | orchestrator | 2026-04-07 03:19:07.693908 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-07 03:19:07.693912 | orchestrator | 2026-04-07 
03:19:07.693916 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 03:19:07.693920 | orchestrator | Tuesday 07 April 2026 03:18:35 +0000 (0:00:00.527) 0:00:01.159 ********* 2026-04-07 03:19:07.693923 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:19:07.693928 | orchestrator | 2026-04-07 03:19:07.693932 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-07 03:19:07.693936 | orchestrator | Tuesday 07 April 2026 03:18:36 +0000 (0:00:00.594) 0:00:01.754 ********* 2026-04-07 03:19:07.693940 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-07 03:19:07.693944 | orchestrator | 2026-04-07 03:19:07.693948 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-07 03:19:07.693952 | orchestrator | Tuesday 07 April 2026 03:18:39 +0000 (0:00:03.795) 0:00:05.549 ********* 2026-04-07 03:19:07.693957 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-07 03:19:07.693962 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-07 03:19:07.693966 | orchestrator | 2026-04-07 03:19:07.693970 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-07 03:19:07.693992 | orchestrator | Tuesday 07 April 2026 03:18:46 +0000 (0:00:07.052) 0:00:12.602 ********* 2026-04-07 03:19:07.693996 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:19:07.694000 | orchestrator | 2026-04-07 03:19:07.694004 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-07 03:19:07.694008 | orchestrator | Tuesday 07 April 2026 03:18:50 +0000 (0:00:03.381) 
0:00:15.983 ********* 2026-04-07 03:19:07.694012 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 03:19:07.694051 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-07 03:19:07.694056 | orchestrator | 2026-04-07 03:19:07.694060 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-07 03:19:07.694063 | orchestrator | Tuesday 07 April 2026 03:18:54 +0000 (0:00:04.222) 0:00:20.205 ********* 2026-04-07 03:19:07.694067 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 03:19:07.694071 | orchestrator | 2026-04-07 03:19:07.694075 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-07 03:19:07.694079 | orchestrator | Tuesday 07 April 2026 03:18:58 +0000 (0:00:03.435) 0:00:23.641 ********* 2026-04-07 03:19:07.694083 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-07 03:19:07.694087 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-07 03:19:07.694090 | orchestrator | 2026-04-07 03:19:07.694094 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-07 03:19:07.694098 | orchestrator | Tuesday 07 April 2026 03:19:05 +0000 (0:00:07.692) 0:00:31.334 ********* 2026-04-07 03:19:07.694134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:07.694156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:07.694160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:07.694170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:07.694176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:07.694183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:07.694188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:07.694197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:13.795104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:13.795234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:13.795250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:13.795272 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:13.795282 | orchestrator | 2026-04-07 03:19:13.795294 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 03:19:13.795305 | orchestrator | Tuesday 07 April 2026 03:19:07 +0000 (0:00:02.113) 0:00:33.448 ********* 2026-04-07 03:19:13.795314 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:13.795326 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:19:13.795335 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:19:13.795345 | orchestrator | 2026-04-07 03:19:13.795355 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 03:19:13.795385 | orchestrator | Tuesday 07 April 2026 03:19:08 +0000 (0:00:00.553) 0:00:34.001 ********* 2026-04-07 03:19:13.795407 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:19:13.795417 | orchestrator | 2026-04-07 03:19:13.795427 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-07 03:19:13.795436 | orchestrator | Tuesday 07 April 2026 03:19:09 +0000 (0:00:00.642) 0:00:34.643 ********* 2026-04-07 03:19:13.795447 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-07 03:19:13.795457 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-07 03:19:13.795466 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-07 03:19:13.795476 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-07 03:19:13.795495 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-07 03:19:13.795504 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-07 03:19:13.795514 | orchestrator | 2026-04-07 03:19:13.795522 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-07 03:19:13.795530 | orchestrator | Tuesday 07 April 2026 03:19:10 +0000 (0:00:01.632) 0:00:36.275 ********* 2026-04-07 03:19:13.795558 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:13.795569 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:13.795586 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:13.795596 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:13.795611 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:25.114771 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-07 03:19:25.114860 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 03:19:25.114887 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 03:19:25.114896 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 03:19:25.114905 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 03:19:25.114949 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 
03:19:25.114958 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-07 03:19:25.114965 | orchestrator | 2026-04-07 03:19:25.114974 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-07 03:19:25.114982 | orchestrator | Tuesday 07 April 2026 03:19:14 +0000 (0:00:03.614) 0:00:39.890 ********* 2026-04-07 03:19:25.114989 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 03:19:25.114997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 03:19:25.115005 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-07 03:19:25.115011 | orchestrator | 2026-04-07 03:19:25.115018 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-07 03:19:25.115025 | orchestrator | Tuesday 07 April 2026 03:19:15 +0000 (0:00:01.570) 0:00:41.461 ********* 2026-04-07 03:19:25.115033 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-07 03:19:25.115040 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-07 03:19:25.115047 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-07 03:19:25.115054 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 03:19:25.115061 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 03:19:25.115073 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-07 03:19:25.115080 | orchestrator | 2026-04-07 03:19:25.115087 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-07 03:19:25.115094 | orchestrator | Tuesday 07 April 2026 03:19:18 +0000 (0:00:02.747) 0:00:44.208 ********* 2026-04-07 03:19:25.115101 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-07 03:19:25.115109 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-07 03:19:25.115122 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-07 03:19:25.115129 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-07 03:19:25.115136 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-07 03:19:25.115143 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-07 03:19:25.115150 | orchestrator | 2026-04-07 03:19:25.115157 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-07 03:19:25.115164 | orchestrator | Tuesday 07 April 2026 03:19:19 +0000 (0:00:01.006) 0:00:45.214 ********* 2026-04-07 03:19:25.115171 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:25.115178 | orchestrator | 2026-04-07 03:19:25.115185 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-07 03:19:25.115192 | orchestrator | Tuesday 07 April 2026 03:19:19 +0000 (0:00:00.192) 0:00:45.407 ********* 2026-04-07 03:19:25.115199 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:25.115206 | orchestrator | 
skipping: [testbed-node-1] 2026-04-07 03:19:25.115213 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:19:25.115220 | orchestrator | 2026-04-07 03:19:25.115227 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 03:19:25.115234 | orchestrator | Tuesday 07 April 2026 03:19:20 +0000 (0:00:00.569) 0:00:45.977 ********* 2026-04-07 03:19:25.115241 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:19:25.115248 | orchestrator | 2026-04-07 03:19:25.115255 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-07 03:19:25.115262 | orchestrator | Tuesday 07 April 2026 03:19:21 +0000 (0:00:00.670) 0:00:46.647 ********* 2026-04-07 03:19:25.115275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:26.193242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:26.193348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:26.193379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 
03:19:26.193470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:26.193490 | orchestrator | 2026-04-07 03:19:26.193497 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-07 03:19:26.193505 | orchestrator | Tuesday 07 April 2026 03:19:25 +0000 (0:00:04.185) 0:00:50.833 ********* 2026-04-07 03:19:26.193516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.295574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295865 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:26.295879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.295892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.295976 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:19:26.295987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.296000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.296011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.296023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.296041 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 03:19:26.296052 | orchestrator | 2026-04-07 03:19:26.296064 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-07 03:19:26.296084 | orchestrator | Tuesday 07 April 2026 03:19:26 +0000 (0:00:01.085) 0:00:51.918 ********* 2026-04-07 03:19:26.902083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.902177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902210 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:26.902221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.902264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902296 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:19:26.902304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:26.902312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:26.902333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:31.414307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:31.414429 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:19:31.414452 | orchestrator | 2026-04-07 03:19:31.414498 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-04-07 03:19:31.414524 | orchestrator | Tuesday 07 April 2026 03:19:27 +0000 (0:00:00.937) 0:00:52.855 ********* 2026-04-07 03:19:31.414545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:31.414566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 
03:19:31.414587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:31.414766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:31.414939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.561770 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.561883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.561904 | orchestrator | 2026-04-07 03:19:45.561919 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-07 03:19:45.561933 | orchestrator | Tuesday 07 April 2026 03:19:31 +0000 (0:00:04.276) 0:00:57.132 ********* 2026-04-07 03:19:45.561944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-07 03:19:45.561958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-07 03:19:45.561970 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-07 03:19:45.561982 | orchestrator | 2026-04-07 03:19:45.561993 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-07 03:19:45.562004 | orchestrator | Tuesday 07 April 2026 03:19:33 +0000 (0:00:01.917) 0:00:59.050 ********* 2026-04-07 03:19:45.562069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:45.562115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:45.562152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:45.562162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.562170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.562178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.562192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.562201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:45.562216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:48.285988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:48.286157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:48.286174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:48.286208 | orchestrator | 2026-04-07 03:19:48.286221 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-07 03:19:48.286238 | orchestrator | Tuesday 07 April 2026 03:19:45 +0000 (0:00:12.224) 0:01:11.274 ********* 2026-04-07 03:19:48.286255 | orchestrator | changed: [testbed-node-1] 
2026-04-07 03:19:48.286273 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:19:48.286289 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:19:48.286306 | orchestrator | 2026-04-07 03:19:48.286325 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-07 03:19:48.286341 | orchestrator | Tuesday 07 April 2026 03:19:47 +0000 (0:00:01.699) 0:01:12.974 ********* 2026-04-07 03:19:48.286359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:48.286377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-04-07 03:19:48.286429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:48.286450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:48.286481 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:48.286498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:48.286514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:48.286525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:48.286551 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:52.137167 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:19:52.137301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-07 03:19:52.137345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:19:52.137360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 03:19:52.137373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 03:19:52.137384 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:19:52.137397 | orchestrator | 2026-04-07 
03:19:52.137409 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-07 03:19:52.137422 | orchestrator | Tuesday 07 April 2026 03:19:48 +0000 (0:00:01.031) 0:01:14.005 ********* 2026-04-07 03:19:52.137433 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:19:52.137444 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:19:52.137454 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:19:52.137465 | orchestrator | 2026-04-07 03:19:52.137476 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-07 03:19:52.137487 | orchestrator | Tuesday 07 April 2026 03:19:49 +0000 (0:00:00.626) 0:01:14.632 ********* 2026-04-07 03:19:52.137530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:52.137553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:52.137565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-07 03:19:52.137610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:52.137622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:52.137639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:19:52.137661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:21:31.439457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:21:31.439561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-07 03:21:31.439573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:21:31.439582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-07 03:21:31.439604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-07 03:21:31.439631 | orchestrator | 2026-04-07 03:21:31.439641 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-07 03:21:31.439651 | orchestrator | Tuesday 07 April 2026 03:19:52 +0000 (0:00:03.227) 0:01:17.859 ********* 2026-04-07 03:21:31.439660 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:21:31.439668 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:21:31.439676 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:21:31.439684 | orchestrator | 2026-04-07 03:21:31.439692 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-07 03:21:31.439700 | orchestrator | Tuesday 07 April 2026 03:19:52 +0000 (0:00:00.348) 0:01:18.208 ********* 2026-04-07 03:21:31.439709 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.439716 | orchestrator | 2026-04-07 03:21:31.439739 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-07 03:21:31.439748 | orchestrator | Tuesday 07 April 2026 03:19:54 +0000 (0:00:02.312) 0:01:20.521 ********* 2026-04-07 03:21:31.439756 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.439764 | orchestrator | 2026-04-07 03:21:31.439772 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-07 03:21:31.439780 | orchestrator | Tuesday 07 April 2026 03:19:57 +0000 (0:00:02.384) 0:01:22.906 ********* 2026-04-07 03:21:31.439788 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.439796 | orchestrator | 2026-04-07 03:21:31.439803 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-07 03:21:31.439811 | orchestrator | Tuesday 07 April 2026 03:20:18 +0000 (0:00:20.968) 0:01:43.874 ********* 2026-04-07 03:21:31.439819 | orchestrator | 2026-04-07 03:21:31.439827 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-04-07 03:21:31.439835 | orchestrator | Tuesday 07 April 2026 03:20:18 +0000 (0:00:00.076) 0:01:43.950 ********* 2026-04-07 03:21:31.439843 | orchestrator | 2026-04-07 03:21:31.439851 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-07 03:21:31.439859 | orchestrator | Tuesday 07 April 2026 03:20:18 +0000 (0:00:00.071) 0:01:44.022 ********* 2026-04-07 03:21:31.439866 | orchestrator | 2026-04-07 03:21:31.439874 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-07 03:21:31.439882 | orchestrator | Tuesday 07 April 2026 03:20:18 +0000 (0:00:00.081) 0:01:44.103 ********* 2026-04-07 03:21:31.439890 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.439898 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:21:31.439906 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:21:31.439914 | orchestrator | 2026-04-07 03:21:31.439922 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-07 03:21:31.439930 | orchestrator | Tuesday 07 April 2026 03:20:48 +0000 (0:00:30.140) 0:02:14.243 ********* 2026-04-07 03:21:31.439938 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.439946 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:21:31.439955 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:21:31.439965 | orchestrator | 2026-04-07 03:21:31.439974 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-07 03:21:31.439983 | orchestrator | Tuesday 07 April 2026 03:20:53 +0000 (0:00:05.211) 0:02:19.454 ********* 2026-04-07 03:21:31.439993 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.440002 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:21:31.440011 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:21:31.440020 | orchestrator | 2026-04-07 
03:21:31.440029 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-07 03:21:31.440038 | orchestrator | Tuesday 07 April 2026 03:21:19 +0000 (0:00:26.165) 0:02:45.620 ********* 2026-04-07 03:21:31.440048 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:21:31.440057 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:21:31.440067 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:21:31.440083 | orchestrator | 2026-04-07 03:21:31.440093 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-07 03:21:31.440103 | orchestrator | Tuesday 07 April 2026 03:21:31 +0000 (0:00:11.114) 0:02:56.734 ********* 2026-04-07 03:21:31.440112 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:21:31.440121 | orchestrator | 2026-04-07 03:21:31.440129 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:21:31.440139 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-07 03:21:31.440151 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:21:31.440161 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:21:31.440171 | orchestrator | 2026-04-07 03:21:31.440180 | orchestrator | 2026-04-07 03:21:31.440189 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:21:31.440199 | orchestrator | Tuesday 07 April 2026 03:21:31 +0000 (0:00:00.309) 0:02:57.044 ********* 2026-04-07 03:21:31.440209 | orchestrator | =============================================================================== 2026-04-07 03:21:31.440219 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.14s 2026-04-07 03:21:31.440228 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 26.17s 2026-04-07 03:21:31.440237 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.97s 2026-04-07 03:21:31.440247 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.22s 2026-04-07 03:21:31.440260 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.11s 2026-04-07 03:21:31.440270 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.69s 2026-04-07 03:21:31.440279 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.05s 2026-04-07 03:21:31.440289 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.21s 2026-04-07 03:21:31.440298 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.28s 2026-04-07 03:21:31.440307 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.22s 2026-04-07 03:21:31.440315 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.19s 2026-04-07 03:21:31.440322 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.80s 2026-04-07 03:21:31.440330 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.61s 2026-04-07 03:21:31.440338 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s 2026-04-07 03:21:31.440352 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.38s 2026-04-07 03:21:31.893605 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.23s 2026-04-07 03:21:31.893701 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.75s 2026-04-07 03:21:31.893714 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.38s 2026-04-07 03:21:31.893723 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.31s 2026-04-07 03:21:31.893731 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.11s 2026-04-07 03:21:34.608725 | orchestrator | 2026-04-07 03:21:34 | INFO  | Task 5ae47fbb-7d4c-4a50-a57b-98a7b02a1b1a (barbican) was prepared for execution. 2026-04-07 03:21:34.608818 | orchestrator | 2026-04-07 03:21:34 | INFO  | It takes a moment until task 5ae47fbb-7d4c-4a50-a57b-98a7b02a1b1a (barbican) has been started and output is visible here. 2026-04-07 03:22:21.364279 | orchestrator | 2026-04-07 03:22:21.364403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:22:21.364424 | orchestrator | 2026-04-07 03:22:21.364429 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:22:21.364434 | orchestrator | Tuesday 07 April 2026 03:21:39 +0000 (0:00:00.287) 0:00:00.287 ********* 2026-04-07 03:22:21.364437 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:22:21.364443 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:22:21.364447 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:22:21.364451 | orchestrator | 2026-04-07 03:22:21.364455 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:22:21.364459 | orchestrator | Tuesday 07 April 2026 03:21:39 +0000 (0:00:00.324) 0:00:00.612 ********* 2026-04-07 03:22:21.364463 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-07 03:22:21.364467 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-07 03:22:21.364471 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-07 03:22:21.364475 | orchestrator | 2026-04-07 03:22:21.364478 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-07 03:22:21.364482 | orchestrator | 2026-04-07 03:22:21.364486 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 03:22:21.364490 | orchestrator | Tuesday 07 April 2026 03:21:40 +0000 (0:00:00.484) 0:00:01.096 ********* 2026-04-07 03:22:21.364495 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:22:21.364500 | orchestrator | 2026-04-07 03:22:21.364503 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-07 03:22:21.364507 | orchestrator | Tuesday 07 April 2026 03:21:40 +0000 (0:00:00.576) 0:00:01.673 ********* 2026-04-07 03:22:21.364511 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-07 03:22:21.364515 | orchestrator | 2026-04-07 03:22:21.364519 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-07 03:22:21.364523 | orchestrator | Tuesday 07 April 2026 03:21:44 +0000 (0:00:03.692) 0:00:05.366 ********* 2026-04-07 03:22:21.364526 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-07 03:22:21.364530 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-07 03:22:21.364534 | orchestrator | 2026-04-07 03:22:21.364538 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-07 03:22:21.364541 | orchestrator | Tuesday 07 April 2026 03:21:51 +0000 (0:00:06.874) 0:00:12.240 ********* 2026-04-07 03:22:21.364545 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:22:21.364549 | orchestrator | 2026-04-07 03:22:21.364553 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-07 
03:22:21.364557 | orchestrator | Tuesday 07 April 2026 03:21:54 +0000 (0:00:03.517) 0:00:15.757 ********* 2026-04-07 03:22:21.364561 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 03:22:21.364565 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-07 03:22:21.364568 | orchestrator | 2026-04-07 03:22:21.364572 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-07 03:22:21.364576 | orchestrator | Tuesday 07 April 2026 03:21:59 +0000 (0:00:04.394) 0:00:20.151 ********* 2026-04-07 03:22:21.364580 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 03:22:21.364583 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-07 03:22:21.364587 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-07 03:22:21.364600 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-07 03:22:21.364604 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-07 03:22:21.364607 | orchestrator | 2026-04-07 03:22:21.364611 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-07 03:22:21.364616 | orchestrator | Tuesday 07 April 2026 03:22:15 +0000 (0:00:16.431) 0:00:36.583 ********* 2026-04-07 03:22:21.364627 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-07 03:22:21.364634 | orchestrator | 2026-04-07 03:22:21.364640 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-07 03:22:21.364646 | orchestrator | Tuesday 07 April 2026 03:22:19 +0000 (0:00:04.046) 0:00:40.629 ********* 2026-04-07 03:22:21.364655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:21.364677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:21.364693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:21.364700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:21.364718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:21.364730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:21.364740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.606810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.606920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.606938 | orchestrator | 2026-04-07 03:22:27.606952 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-07 03:22:27.606965 | orchestrator | Tuesday 07 April 2026 03:22:21 +0000 (0:00:01.706) 0:00:42.336 ********* 2026-04-07 03:22:27.606977 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-07 03:22:27.606988 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-07 03:22:27.606999 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-07 03:22:27.607018 | orchestrator | 2026-04-07 03:22:27.607038 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-07 03:22:27.607070 | orchestrator | Tuesday 07 April 2026 03:22:22 +0000 (0:00:01.157) 0:00:43.493 ********* 2026-04-07 03:22:27.607088 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:22:27.607108 | orchestrator | 2026-04-07 03:22:27.607127 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-07 03:22:27.607174 | orchestrator | Tuesday 07 April 2026 03:22:22 +0000 (0:00:00.356) 0:00:43.849 ********* 2026-04-07 03:22:27.607193 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 03:22:27.607212 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:22:27.607233 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:22:27.607256 | orchestrator | 2026-04-07 03:22:27.607276 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 03:22:27.607294 | orchestrator | Tuesday 07 April 2026 03:22:23 +0000 (0:00:00.359) 0:00:44.209 ********* 2026-04-07 03:22:27.607366 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:22:27.607388 | orchestrator | 2026-04-07 03:22:27.607405 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-07 03:22:27.607424 | orchestrator | Tuesday 07 April 2026 03:22:23 +0000 (0:00:00.618) 0:00:44.828 ********* 2026-04-07 03:22:27.607445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:27.607492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:27.607513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:27.607542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.607591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.607612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.607630 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:27.607664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:29.075741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:29.075848 | orchestrator | 2026-04-07 03:22:29.075862 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-07 03:22:29.075872 | orchestrator | Tuesday 07 April 2026 03:22:27 +0000 (0:00:03.745) 0:00:48.573 ********* 2026-04-07 03:22:29.075903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:29.075933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.075943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.075951 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:22:29.075962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:29.075985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.076000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.076017 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:22:29.076031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:29.076041 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.076050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:29.076059 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:22:29.076069 | orchestrator | 2026-04-07 03:22:29.076077 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-07 03:22:29.076085 | orchestrator | Tuesday 07 April 2026 03:22:28 +0000 (0:00:00.624) 0:00:49.197 ********* 2026-04-07 03:22:29.076101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:32.611567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:32.611715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 
03:22:32.611740 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:22:32.611782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:32.611800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:32.611815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:32.611829 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:22:32.611872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:32.611921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:32.611945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:32.611961 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:22:32.611976 | orchestrator | 2026-04-07 03:22:32.611993 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-07 03:22:32.612009 | orchestrator | Tuesday 07 April 2026 03:22:29 +0000 (0:00:00.856) 0:00:50.054 ********* 2026-04-07 03:22:32.612023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:32.612038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:32.612074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:42.734212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:42.734527 | orchestrator | 2026-04-07 03:22:42.734548 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-07 03:22:42.734567 | orchestrator | Tuesday 07 April 2026 03:22:32 +0000 (0:00:03.528) 0:00:53.582 ********* 2026-04-07 03:22:42.734587 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:22:42.734608 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:22:42.734630 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:22:42.734651 | orchestrator | 2026-04-07 03:22:42.734682 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-07 03:22:42.734694 | orchestrator | Tuesday 07 April 2026 03:22:34 +0000 (0:00:01.590) 0:00:55.173 ********* 2026-04-07 03:22:42.734706 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:22:42.734717 | orchestrator | 2026-04-07 03:22:42.734728 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-07 03:22:42.734745 | orchestrator | Tuesday 07 April 2026 03:22:35 +0000 (0:00:01.005) 0:00:56.179 ********* 2026-04-07 03:22:42.734763 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:22:42.734780 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:22:42.734799 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:22:42.734817 | orchestrator | 2026-04-07 03:22:42.734833 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-07 03:22:42.734851 | orchestrator | Tuesday 07 April 2026 03:22:35 +0000 (0:00:00.572) 0:00:56.751 ********* 2026-04-07 03:22:42.734910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:42.734935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:42.734969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:42.734992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:43.666472 | orchestrator | 2026-04-07 03:22:43.666485 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-07 03:22:43.666498 | orchestrator | Tuesday 07 April 2026 03:22:42 +0000 (0:00:06.958) 0:01:03.709 ********* 2026-04-07 03:22:43.666527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:43.666546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:43.666559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:43.666570 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:22:43.666584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:43.666608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:43.666622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:43.666636 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:22:43.666657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-07 03:22:46.099916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:22:46.100005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:22:46.100039 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:22:46.100049 | orchestrator | 2026-04-07 03:22:46.100057 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-07 03:22:46.100065 | orchestrator | Tuesday 07 April 2026 03:22:43 +0000 (0:00:00.929) 0:01:04.639 ********* 2026-04-07 03:22:46.100072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:46.100080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:46.100100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-07 03:22:46.100112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:22:46.100159 | orchestrator | 2026-04-07 03:22:46.100166 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-07 03:22:46.100176 | orchestrator | Tuesday 07 April 2026 03:22:46 +0000 (0:00:02.424) 0:01:07.064 ********* 2026-04-07 03:23:31.860407 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:23:31.860564 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
03:23:31.860603 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:23:31.860621 | orchestrator | 2026-04-07 03:23:31.860660 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-07 03:23:31.860701 | orchestrator | Tuesday 07 April 2026 03:22:46 +0000 (0:00:00.348) 0:01:07.412 ********* 2026-04-07 03:23:31.860719 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.860734 | orchestrator | 2026-04-07 03:23:31.860750 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-07 03:23:31.860766 | orchestrator | Tuesday 07 April 2026 03:22:48 +0000 (0:00:02.367) 0:01:09.779 ********* 2026-04-07 03:23:31.860782 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.860800 | orchestrator | 2026-04-07 03:23:31.860816 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-07 03:23:31.860830 | orchestrator | Tuesday 07 April 2026 03:22:51 +0000 (0:00:02.428) 0:01:12.208 ********* 2026-04-07 03:23:31.860843 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.860864 | orchestrator | 2026-04-07 03:23:31.860879 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 03:23:31.860891 | orchestrator | Tuesday 07 April 2026 03:23:04 +0000 (0:00:13.388) 0:01:25.596 ********* 2026-04-07 03:23:31.860904 | orchestrator | 2026-04-07 03:23:31.860917 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 03:23:31.860929 | orchestrator | Tuesday 07 April 2026 03:23:04 +0000 (0:00:00.082) 0:01:25.678 ********* 2026-04-07 03:23:31.860941 | orchestrator | 2026-04-07 03:23:31.860953 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-07 03:23:31.860965 | orchestrator | Tuesday 07 April 2026 03:23:04 +0000 (0:00:00.082) 0:01:25.761 ********* 2026-04-07 
03:23:31.860978 | orchestrator | 2026-04-07 03:23:31.860992 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-07 03:23:31.861006 | orchestrator | Tuesday 07 April 2026 03:23:04 +0000 (0:00:00.072) 0:01:25.834 ********* 2026-04-07 03:23:31.861019 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.861032 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:23:31.861046 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:23:31.861059 | orchestrator | 2026-04-07 03:23:31.861073 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-07 03:23:31.861087 | orchestrator | Tuesday 07 April 2026 03:23:11 +0000 (0:00:06.774) 0:01:32.608 ********* 2026-04-07 03:23:31.861099 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:23:31.861113 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.861128 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:23:31.861141 | orchestrator | 2026-04-07 03:23:31.861154 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-07 03:23:31.861170 | orchestrator | Tuesday 07 April 2026 03:23:21 +0000 (0:00:09.828) 0:01:42.436 ********* 2026-04-07 03:23:31.861185 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:23:31.861230 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:23:31.861246 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:23:31.861254 | orchestrator | 2026-04-07 03:23:31.861262 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:23:31.861272 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:23:31.861281 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 03:23:31.861289 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 03:23:31.861297 | orchestrator | 2026-04-07 03:23:31.861305 | orchestrator | 2026-04-07 03:23:31.861313 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:23:31.861321 | orchestrator | Tuesday 07 April 2026 03:23:31 +0000 (0:00:10.024) 0:01:52.461 ********* 2026-04-07 03:23:31.861329 | orchestrator | =============================================================================== 2026-04-07 03:23:31.861336 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.43s 2026-04-07 03:23:31.861356 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.39s 2026-04-07 03:23:31.861363 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.02s 2026-04-07 03:23:31.861371 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.83s 2026-04-07 03:23:31.861379 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.96s 2026-04-07 03:23:31.861387 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.87s 2026-04-07 03:23:31.861394 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.77s 2026-04-07 03:23:31.861402 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.39s 2026-04-07 03:23:31.861410 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.05s 2026-04-07 03:23:31.861418 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.75s 2026-04-07 03:23:31.861425 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.69s 2026-04-07 03:23:31.861433 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.53s 
2026-04-07 03:23:31.861441 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.52s 2026-04-07 03:23:31.861448 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.43s 2026-04-07 03:23:31.861457 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.42s 2026-04-07 03:23:31.861484 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.37s 2026-04-07 03:23:31.861493 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.71s 2026-04-07 03:23:31.861508 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.59s 2026-04-07 03:23:31.861516 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.16s 2026-04-07 03:23:31.861524 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.01s 2026-04-07 03:23:34.360548 | orchestrator | 2026-04-07 03:23:34 | INFO  | Task 3bfb4c8d-83a7-4240-a6f1-83c2e368951d (designate) was prepared for execution. 2026-04-07 03:23:34.360618 | orchestrator | 2026-04-07 03:23:34 | INFO  | It takes a moment until task 3bfb4c8d-83a7-4240-a6f1-83c2e368951d (designate) has been started and output is visible here. 
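The PLAY RECAP above is the per-host success summary for the barbican play (`failed=0` and `unreachable=0` on every node means the play passed). When post-processing console logs like this one, those counters can be extracted mechanically. The helper below is an illustrative sketch, not part of the job itself; the regex and field names simply mirror the `ok=… changed=… unreachable=… failed=…` layout Ansible's default callback prints.

```python
import re

# Matches one PLAY RECAP host line, e.g.
#   testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line):
    """Return counters for one PLAY RECAP line, or None if it does not match."""
    m = RECAP_RE.search(line)
    if not m:
        return None
    # Keep the hostname as a string, convert all counters to int.
    return {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}

line = ("testbed-node-0 : ok=24  changed=18  unreachable=0 "
        "failed=0 skipped=7  rescued=0 ignored=0")
stats = parse_recap(line)
print(stats["host"], "failed:", stats["failed"])
```

A CI gate could iterate such a parser over the whole log and flag any host where `failed` or `unreachable` is non-zero, instead of grepping for the word "failed" (which also appears in benign task names).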
2026-04-07 03:24:07.900594 | orchestrator |
2026-04-07 03:24:07.900694 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:24:07.900707 | orchestrator |
2026-04-07 03:24:07.900715 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:24:07.900723 | orchestrator | Tuesday 07 April 2026 03:23:38 +0000 (0:00:00.296) 0:00:00.296 *********
2026-04-07 03:24:07.900730 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:24:07.900737 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:24:07.900744 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:24:07.900750 | orchestrator |
2026-04-07 03:24:07.900756 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:24:07.900762 | orchestrator | Tuesday 07 April 2026 03:23:39 +0000 (0:00:00.349) 0:00:00.646 *********
2026-04-07 03:24:07.900769 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-07 03:24:07.900776 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-07 03:24:07.900783 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-07 03:24:07.900790 | orchestrator |
2026-04-07 03:24:07.900798 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-07 03:24:07.900804 | orchestrator |
2026-04-07 03:24:07.900811 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-07 03:24:07.900817 | orchestrator | Tuesday 07 April 2026 03:23:39 +0000 (0:00:00.504) 0:00:01.150 *********
2026-04-07 03:24:07.900825 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:24:07.900853 | orchestrator |
2026-04-07 03:24:07.900859 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-07 03:24:07.900865 | orchestrator | Tuesday 07 April 2026 03:23:40 +0000 (0:00:00.628) 0:00:01.779 *********
2026-04-07 03:24:07.900870 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-07 03:24:07.900876 | orchestrator |
2026-04-07 03:24:07.900881 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-07 03:24:07.900887 | orchestrator | Tuesday 07 April 2026 03:23:43 +0000 (0:00:03.467) 0:00:05.246 *********
2026-04-07 03:24:07.900893 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-07 03:24:07.900900 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-07 03:24:07.900906 | orchestrator |
2026-04-07 03:24:07.900913 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-07 03:24:07.900919 | orchestrator | Tuesday 07 April 2026 03:23:50 +0000 (0:00:06.973) 0:00:12.220 *********
2026-04-07 03:24:07.900925 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 03:24:07.900930 | orchestrator |
2026-04-07 03:24:07.900936 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-07 03:24:07.900942 | orchestrator | Tuesday 07 April 2026 03:23:54 +0000 (0:00:03.434) 0:00:15.654 *********
2026-04-07 03:24:07.900948 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:24:07.900953 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-07 03:24:07.900959 | orchestrator |
2026-04-07 03:24:07.900964 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-07 03:24:07.900971 | orchestrator | Tuesday 07 April 2026 03:23:58 +0000 (0:00:04.361) 0:00:20.016 *********
2026-04-07 03:24:07.900977 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 03:24:07.900983 | orchestrator |
2026-04-07 03:24:07.900989 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-07 03:24:07.900995 | orchestrator | Tuesday 07 April 2026 03:24:01 +0000 (0:00:03.385) 0:00:23.401 *********
2026-04-07 03:24:07.901002 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-07 03:24:07.901007 | orchestrator |
2026-04-07 03:24:07.901013 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-07 03:24:07.901019 | orchestrator | Tuesday 07 April 2026 03:24:05 +0000 (0:00:03.912) 0:00:27.313 *********
2026-04-07 03:24:07.901042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:07.901072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:07.901086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:07.901094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:07.901102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:07.901108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:07.901118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:07.901131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:14.434705 | orchestrator |
2026-04-07 03:24:14.434711 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-07 03:24:14.434718 | orchestrator | Tuesday 07 April 2026 03:24:08 +0000 (0:00:02.914) 0:00:30.228 *********
2026-04-07 03:24:14.434723 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:24:14.434729 | orchestrator |
2026-04-07 03:24:14.434735 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-07 03:24:14.434740 | orchestrator | Tuesday 07 April 2026 03:24:08 +0000 (0:00:00.135) 0:00:30.364 *********
2026-04-07 03:24:14.434745 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:24:14.434750 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:24:14.434756 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:24:14.434761 | orchestrator |
2026-04-07 03:24:14.434766 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-07 03:24:14.434771 | orchestrator | Tuesday 07 April 2026 03:24:09 +0000 (0:00:00.566) 0:00:30.930 *********
2026-04-07 03:24:14.434776 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:24:14.434782 | orchestrator |
2026-04-07 03:24:14.434787 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-07 03:24:14.434796 | orchestrator | Tuesday 07 April 2026 03:24:10 +0000 (0:00:00.588) 0:00:31.519 *********
2026-04-07 03:24:14.434805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:14.434817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:16.384986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:16.385085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:16.385097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:16.385200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:16.385211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:16.385323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445721 | orchestrator |
2026-04-07 03:24:17.445736 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-04-07 03:24:17.445749 | orchestrator | Tuesday 07 April 2026 03:24:16 +0000 (0:00:06.313) 0:00:37.832 *********
2026-04-07 03:24:17.445778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:17.445791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:17.445821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445877 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:24:17.445896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:17.445908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 03:24:17.445922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 03:24:17.445952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 03:24:18.241158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 03:24:18.241279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:24:18.241293 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:24:18.241320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-07 03:24:18.241332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:18.241343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.241353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.241379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 
03:24:18.241396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.241405 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:24:18.241415 | orchestrator | 2026-04-07 03:24:18.241425 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-07 03:24:18.241436 | orchestrator | Tuesday 07 April 2026 03:24:17 +0000 (0:00:01.176) 0:00:39.009 ********* 2026-04-07 03:24:18.241449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:18.241459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:18.241468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.241483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.580924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581060 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:24:18.581098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:18.581117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:18.581167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581273 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:24:18.581294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:18.581308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:18.581321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:18.581370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:22.935602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:22.935710 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:24:22.935727 | orchestrator | 2026-04-07 03:24:22.935739 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-07 
03:24:22.935750 | orchestrator | Tuesday 07 April 2026 03:24:18 +0000 (0:00:01.018) 0:00:40.028 ********* 2026-04-07 03:24:22.935788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:22.935801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:22.935812 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:22.935859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:22.935958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087232 | orchestrator | 2026-04-07 03:24:35.087237 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-07 03:24:35.087242 | orchestrator | Tuesday 07 April 2026 03:24:24 +0000 (0:00:06.256) 0:00:46.284 ********* 2026-04-07 03:24:35.087250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:35.087256 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:35.087264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:35.087269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:35.087279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:43.843390 | orchestrator | 2026-04-07 03:24:43.843395 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-07 03:24:43.843401 | orchestrator | Tuesday 07 April 2026 03:24:39 +0000 (0:00:14.968) 0:01:01.253 ********* 2026-04-07 03:24:43.843408 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-07 03:24:48.421334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-07 03:24:48.421435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-07 03:24:48.421447 | orchestrator | 2026-04-07 03:24:48.421459 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-07 03:24:48.421469 | orchestrator | Tuesday 07 April 2026 03:24:43 +0000 (0:00:04.038) 0:01:05.292 ********* 2026-04-07 03:24:48.421479 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-07 03:24:48.421488 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-07 03:24:48.421498 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-07 03:24:48.421509 | orchestrator | 2026-04-07 03:24:48.421520 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-07 03:24:48.421550 | orchestrator | Tuesday 07 April 2026 03:24:46 +0000 (0:00:02.635) 0:01:07.927 ********* 2026-04-07 03:24:48.421565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:48.421608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:48.421622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-07 03:24:48.421651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:48.421664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:48.421684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-07 03:24:48.421705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:48.421716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:48.421726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-07 03:24:48.421735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:48.421752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:51.326885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-04-07 03:24:51.327051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:51.327115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:51.327138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:51.327159 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:51.327180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:51.327225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:51.327262 | orchestrator | 2026-04-07 03:24:51.327283 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-07 03:24:51.327302 | orchestrator | Tuesday 07 April 2026 03:24:49 +0000 (0:00:03.017) 0:01:10.945 ********* 2026-04-07 03:24:51.327331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:51.327351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 
03:24:51.327370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:51.327388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:51.327420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:52.597049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:52.597289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:52.597308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:52.597315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:52.597321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:52.597332 | orchestrator | 2026-04-07 03:24:52.597339 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-07 03:24:52.597347 | orchestrator | Tuesday 07 April 2026 03:24:52 +0000 (0:00:02.791) 0:01:13.737 ********* 2026-04-07 03:24:52.597353 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:24:52.597363 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:24:53.337003 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:24:53.337186 | orchestrator | 2026-04-07 03:24:53.337208 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-07 03:24:53.337221 | orchestrator | Tuesday 07 April 2026 03:24:52 +0000 (0:00:00.311) 0:01:14.048 ********* 2026-04-07 03:24:53.337253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:53.337270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:53.337284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337374 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:24:53.337392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:53.337404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:53.337416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:53.337467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 
03:24:56.947438 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:24:56.947553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-07 03:24:56.947572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-07 03:24:56.947583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-07 03:24:56.947593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-07 03:24:56.947621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-07 03:24:56.947631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:24:56.947640 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:24:56.947650 | orchestrator | 2026-04-07 03:24:56.947675 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-07 03:24:56.947686 | orchestrator | Tuesday 07 April 2026 03:24:53 +0000 (0:00:00.853) 0:01:14.901 ********* 2026-04-07 03:24:56.947700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:56.947710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:56.947720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-07 03:24:56.947736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:56.947751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 
03:24:58.950674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950701 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:24:58.950716 | orchestrator | 2026-04-07 03:24:58.950724 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-07 03:24:58.950732 | orchestrator | Tuesday 07 April 2026 03:24:58 +0000 (0:00:05.158) 0:01:20.060 ********* 2026-04-07 03:24:58.950739 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:24:58.950751 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:26:22.756928 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:26:22.757101 | orchestrator | 2026-04-07 03:26:22.757139 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-07 03:26:22.757177 | orchestrator | Tuesday 07 April 2026 03:24:58 +0000 (0:00:00.337) 0:01:20.397 ********* 
2026-04-07 03:26:22.757196 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-07 03:26:22.757213 | orchestrator |
2026-04-07 03:26:22.757230 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-07 03:26:22.757241 | orchestrator | Tuesday 07 April 2026 03:25:01 +0000 (0:00:02.290) 0:01:22.687 *********
2026-04-07 03:26:22.757251 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-07 03:26:22.757261 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-07 03:26:22.757271 | orchestrator |
2026-04-07 03:26:22.757280 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-07 03:26:22.757290 | orchestrator | Tuesday 07 April 2026 03:25:03 +0000 (0:00:02.409) 0:01:25.097 *********
2026-04-07 03:26:22.757300 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757309 | orchestrator |
2026-04-07 03:26:22.757319 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-07 03:26:22.757329 | orchestrator | Tuesday 07 April 2026 03:25:20 +0000 (0:00:17.030) 0:01:42.128 *********
2026-04-07 03:26:22.757338 | orchestrator |
2026-04-07 03:26:22.757348 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-07 03:26:22.757357 | orchestrator | Tuesday 07 April 2026 03:25:20 +0000 (0:00:00.077) 0:01:42.205 *********
2026-04-07 03:26:22.757367 | orchestrator |
2026-04-07 03:26:22.757399 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-07 03:26:22.757409 | orchestrator | Tuesday 07 April 2026 03:25:20 +0000 (0:00:00.093) 0:01:42.299 *********
2026-04-07 03:26:22.757418 | orchestrator |
2026-04-07 03:26:22.757428 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-07 03:26:22.757437 | orchestrator | Tuesday 07 April 2026 03:25:20 +0000 (0:00:00.076) 0:01:42.375 *********
2026-04-07 03:26:22.757448 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757458 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757468 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757479 | orchestrator |
2026-04-07 03:26:22.757491 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-07 03:26:22.757503 | orchestrator | Tuesday 07 April 2026 03:25:29 +0000 (0:00:08.353) 0:01:50.729 *********
2026-04-07 03:26:22.757514 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757525 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757536 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757547 | orchestrator |
2026-04-07 03:26:22.757557 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-07 03:26:22.757568 | orchestrator | Tuesday 07 April 2026 03:25:39 +0000 (0:00:10.717) 0:02:01.446 *********
2026-04-07 03:26:22.757579 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757590 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757601 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757611 | orchestrator |
2026-04-07 03:26:22.757622 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-07 03:26:22.757633 | orchestrator | Tuesday 07 April 2026 03:25:46 +0000 (0:00:06.069) 0:02:07.516 *********
2026-04-07 03:26:22.757645 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757655 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757666 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757677 | orchestrator |
2026-04-07 03:26:22.757689 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-07 03:26:22.757700 | orchestrator | Tuesday 07 April 2026 03:25:57 +0000 (0:00:10.952) 0:02:18.468 *********
2026-04-07 03:26:22.757711 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757721 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757732 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757744 | orchestrator |
2026-04-07 03:26:22.757755 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-07 03:26:22.757766 | orchestrator | Tuesday 07 April 2026 03:26:08 +0000 (0:00:11.014) 0:02:29.482 *********
2026-04-07 03:26:22.757777 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757788 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:26:22.757799 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:26:22.757810 | orchestrator |
2026-04-07 03:26:22.757821 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-07 03:26:22.757831 | orchestrator | Tuesday 07 April 2026 03:26:14 +0000 (0:00:06.028) 0:02:35.511 *********
2026-04-07 03:26:22.757841 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:26:22.757850 | orchestrator |
2026-04-07 03:26:22.757860 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:26:22.757871 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 03:26:22.757882 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 03:26:22.757891 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 03:26:22.757900 | orchestrator |
2026-04-07 03:26:22.757910 | orchestrator |
2026-04-07 03:26:22.757919 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:26:22.757936 | orchestrator | Tuesday 07 April 2026 03:26:22 +0000 (0:00:08.236) 0:02:43.747 *********
2026-04-07 03:26:22.757945 | orchestrator | ===============================================================================
2026-04-07 03:26:22.757982 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.03s
2026-04-07 03:26:22.757993 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.97s
2026-04-07 03:26:22.758072 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.01s
2026-04-07 03:26:22.758088 | orchestrator | designate : Restart designate-producer container ----------------------- 10.95s
2026-04-07 03:26:22.758113 | orchestrator | designate : Restart designate-api container ---------------------------- 10.72s
2026-04-07 03:26:22.758130 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.35s
2026-04-07 03:26:22.758146 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.24s
2026-04-07 03:26:22.758162 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.97s
2026-04-07 03:26:22.758178 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.31s
2026-04-07 03:26:22.758194 | orchestrator | designate : Copying over config.json files for services ----------------- 6.26s
2026-04-07 03:26:22.758209 | orchestrator | designate : Restart designate-central container ------------------------- 6.07s
2026-04-07 03:26:22.758226 | orchestrator | designate : Restart designate-worker container -------------------------- 6.03s
2026-04-07 03:26:22.758242 | orchestrator | designate : Check designate containers ---------------------------------- 5.16s
2026-04-07 03:26:22.758258 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.36s
2026-04-07 03:26:22.758275 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.04s
2026-04-07 03:26:22.758288 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.91s
2026-04-07 03:26:22.758298 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.47s
2026-04-07 03:26:22.758307 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.43s
2026-04-07 03:26:22.758317 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.39s
2026-04-07 03:26:22.758326 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.02s
2026-04-07 03:26:25.393457 | orchestrator | 2026-04-07 03:26:25 | INFO  | Task 1f6726f8-b00f-4dbf-aae2-3f15fc7b4d08 (octavia) was prepared for execution.
2026-04-07 03:26:25.393569 | orchestrator | 2026-04-07 03:26:25 | INFO  | It takes a moment until task 1f6726f8-b00f-4dbf-aae2-3f15fc7b4d08 (octavia) has been started and output is visible here.
2026-04-07 03:28:40.632614 | orchestrator |
2026-04-07 03:28:40.632846 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:28:40.632916 | orchestrator |
2026-04-07 03:28:40.632932 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:28:40.632945 | orchestrator | Tuesday 07 April 2026 03:26:30 +0000 (0:00:00.284) 0:00:00.284 *********
2026-04-07 03:28:40.632956 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:28:40.632968 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:28:40.632980 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:28:40.632990 | orchestrator |
2026-04-07 03:28:40.633002 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:28:40.633013 | orchestrator | Tuesday 07 April 2026 03:26:30 +0000 (0:00:00.361) 0:00:00.645 *********
2026-04-07 03:28:40.633023 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-07 03:28:40.633035 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-07 03:28:40.633046 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-07 03:28:40.633057 | orchestrator |
2026-04-07 03:28:40.633069 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-07 03:28:40.633080 | orchestrator |
2026-04-07 03:28:40.633091 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-07 03:28:40.633127 | orchestrator | Tuesday 07 April 2026 03:26:30 +0000 (0:00:00.498) 0:00:01.143 *********
2026-04-07 03:28:40.633142 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:28:40.633157 | orchestrator |
2026-04-07 03:28:40.633170 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-07 03:28:40.633182 | orchestrator | Tuesday 07 April 2026 03:26:31 +0000 (0:00:00.606) 0:00:01.750 *********
2026-04-07 03:28:40.633195 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-07 03:28:40.633208 | orchestrator |
2026-04-07 03:28:40.633221 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-07 03:28:40.633234 | orchestrator | Tuesday 07 April 2026 03:26:35 +0000 (0:00:03.729) 0:00:05.479 *********
2026-04-07 03:28:40.633246 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-07 03:28:40.633260 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-07 03:28:40.633272 | orchestrator |
2026-04-07 03:28:40.633284 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-07 03:28:40.633297 | orchestrator | Tuesday 07 April 2026 03:26:42 +0000 (0:00:06.872) 0:00:12.351 *********
2026-04-07 03:28:40.633309 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 03:28:40.633322 | orchestrator |
2026-04-07 03:28:40.633334 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-07 03:28:40.633347 | orchestrator | Tuesday 07 April 2026 03:26:45 +0000 (0:00:03.367) 0:00:15.719 *********
2026-04-07 03:28:40.633360 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:28:40.633373 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-07 03:28:40.633386 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-07 03:28:40.633399 | orchestrator |
2026-04-07 03:28:40.633412 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-07 03:28:40.633424 | orchestrator | Tuesday 07 April 2026 03:26:54 +0000 (0:00:08.784) 0:00:24.504 *********
2026-04-07 03:28:40.633437 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 03:28:40.633450 | orchestrator |
2026-04-07 03:28:40.633462 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-07 03:28:40.633487 | orchestrator | Tuesday 07 April 2026 03:26:57 +0000 (0:00:03.542) 0:00:28.047 *********
2026-04-07 03:28:40.633499 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-07 03:28:40.633510 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-07 03:28:40.633521 | orchestrator |
2026-04-07 03:28:40.633531 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-07 03:28:40.633542 | orchestrator | Tuesday 07 April 2026 03:27:05 +0000 (0:00:07.561) 0:00:35.608 *********
2026-04-07 03:28:40.633553 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-07 03:28:40.633564 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-07 03:28:40.633575 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-07 03:28:40.633586 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-07 03:28:40.633597 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-07 03:28:40.633607 | orchestrator |
2026-04-07 03:28:40.633618 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-07 03:28:40.633636 | orchestrator | Tuesday 07 April 2026 03:27:21 +0000 (0:00:16.587) 0:00:52.196 *********
2026-04-07 03:28:40.633654 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:28:40.633672 | orchestrator |
2026-04-07 03:28:40.633690 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-07 03:28:40.633720 | orchestrator | Tuesday 07 April 2026 03:27:22 +0000 (0:00:00.861) 0:00:53.057 *********
2026-04-07 03:28:40.633736 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.633753 | orchestrator |
2026-04-07 03:28:40.633771 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-07 03:28:40.633866 | orchestrator | Tuesday 07 April 2026 03:27:27 +0000 (0:00:05.141) 0:00:58.199 *********
2026-04-07 03:28:40.633883 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.633900 | orchestrator |
2026-04-07 03:28:40.633919 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-07 03:28:40.633959 | orchestrator | Tuesday 07 April 2026 03:27:33 +0000 (0:00:05.199) 0:01:03.399 *********
2026-04-07 03:28:40.633975 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:28:40.633991 | orchestrator |
2026-04-07 03:28:40.634006 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-07 03:28:40.634095 | orchestrator | Tuesday 07 April 2026 03:27:36 +0000 (0:00:03.294) 0:01:06.693 *********
2026-04-07 03:28:40.634115 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-07 03:28:40.634134 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-07 03:28:40.634153 | orchestrator |
2026-04-07 03:28:40.634170 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-07 03:28:40.634197 | orchestrator | Tuesday 07 April 2026 03:27:46 +0000 (0:00:10.338) 0:01:17.032 *********
2026-04-07 03:28:40.634216 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-07 03:28:40.634234 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-07 03:28:40.634255 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-07 03:28:40.634275 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-07 03:28:40.634293 | orchestrator |
2026-04-07 03:28:40.634304 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-07 03:28:40.634315 | orchestrator | Tuesday 07 April 2026 03:28:04 +0000 (0:00:17.495) 0:01:34.527 *********
2026-04-07 03:28:40.634330 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634341 | orchestrator |
2026-04-07 03:28:40.634352 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-07 03:28:40.634363 | orchestrator | Tuesday 07 April 2026 03:28:09 +0000 (0:00:05.022) 0:01:39.550 *********
2026-04-07 03:28:40.634374 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634385 | orchestrator |
2026-04-07 03:28:40.634395 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-07 03:28:40.634406 | orchestrator | Tuesday 07 April 2026 03:28:15 +0000 (0:00:05.791) 0:01:45.341 *********
2026-04-07 03:28:40.634417 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:28:40.634427 | orchestrator |
2026-04-07 03:28:40.634438 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-07 03:28:40.634449 | orchestrator | Tuesday 07 April 2026 03:28:15 +0000 (0:00:00.239) 0:01:45.580 *********
2026-04-07 03:28:40.634460 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:28:40.634471 | orchestrator |
2026-04-07 03:28:40.634482 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-07 03:28:40.634493 | orchestrator | Tuesday 07 April 2026 03:28:19 +0000 (0:00:04.615) 0:01:50.196 *********
2026-04-07 03:28:40.634504 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:28:40.634515 | orchestrator |
2026-04-07 03:28:40.634526 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-07 03:28:40.634536 | orchestrator | Tuesday 07 April 2026 03:28:21 +0000 (0:00:01.174) 0:01:51.370 *********
2026-04-07 03:28:40.634559 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.634570 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.634581 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634592 | orchestrator |
2026-04-07 03:28:40.634603 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-07 03:28:40.634622 | orchestrator | Tuesday 07 April 2026 03:28:26 +0000 (0:00:05.785) 0:01:57.156 *********
2026-04-07 03:28:40.634633 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634644 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.634655 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.634666 | orchestrator |
2026-04-07 03:28:40.634677 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-07 03:28:40.634688 | orchestrator | Tuesday 07 April 2026 03:28:31 +0000 (0:00:04.631) 0:02:01.788 *********
2026-04-07 03:28:40.634699 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634710 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.634721 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.634732 | orchestrator |
2026-04-07 03:28:40.634742 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-07 03:28:40.634753 | orchestrator | Tuesday 07 April 2026 03:28:32 +0000 (0:00:02.038) 0:02:02.919 *********
2026-04-07 03:28:40.634764 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:28:40.634833 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:28:40.634845 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:28:40.634856 | orchestrator |
2026-04-07 03:28:40.634867 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-07 03:28:40.634878 | orchestrator | Tuesday 07 April 2026 03:28:34 +0000 (0:00:02.038) 0:02:04.958 *********
2026-04-07 03:28:40.634889 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.634900 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.634910 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634921 | orchestrator |
2026-04-07 03:28:40.634932 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-07 03:28:40.634943 | orchestrator | Tuesday 07 April 2026 03:28:35 +0000 (0:00:01.292) 0:02:06.250 *********
2026-04-07 03:28:40.634954 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.634965 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.634976 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.634987 | orchestrator |
2026-04-07 03:28:40.634998 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-07 03:28:40.635008 | orchestrator | Tuesday 07 April 2026 03:28:37 +0000 (0:00:01.236) 0:02:07.487 *********
2026-04-07 03:28:40.635019 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:28:40.635030 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:28:40.635041 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:28:40.635052 | orchestrator |
2026-04-07 03:28:40.635075 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-07 03:29:07.699308 | orchestrator | Tuesday 07 April 2026 03:28:40 +0000 (0:00:03.399) 0:02:10.886 *********
2026-04-07 03:29:07.699413 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:29:07.699428 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:29:07.699439 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:29:07.699450 | orchestrator |
2026-04-07 03:29:07.699461 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-07 03:29:07.699472 | orchestrator | Tuesday 07 April 2026 03:28:42 +0000 (0:00:01.527) 0:02:12.414 *********
2026-04-07 03:29:07.699482 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699494 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:29:07.699504 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:29:07.699515 | orchestrator |
2026-04-07 03:29:07.699525 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-07 03:29:07.699535 | orchestrator | Tuesday 07 April 2026 03:28:42 +0000 (0:00:00.704) 0:02:13.118 *********
2026-04-07 03:29:07.699546 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:29:07.699581 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:29:07.699591 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699601 | orchestrator |
2026-04-07 03:29:07.699612 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-07 03:29:07.699622 | orchestrator | Tuesday 07 April 2026 03:28:46 +0000 (0:00:03.164) 0:02:16.283 *********
2026-04-07 03:29:07.699633 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:29:07.699643 | orchestrator |
2026-04-07 03:29:07.699653 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-07 03:29:07.699663 | orchestrator | Tuesday 07 April 2026 03:28:46 +0000 (0:00:00.607) 0:02:16.890 *********
2026-04-07 03:29:07.699673 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699683 | orchestrator |
2026-04-07 03:29:07.699694 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-07 03:29:07.699704 | orchestrator | Tuesday 07 April 2026 03:28:50 +0000 (0:00:04.234) 0:02:21.125 *********
2026-04-07 03:29:07.699714 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699724 | orchestrator |
2026-04-07 03:29:07.699734 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-07 03:29:07.699790 | orchestrator | Tuesday 07 April 2026 03:28:54 +0000 (0:00:03.500) 0:02:24.625 *********
2026-04-07 03:29:07.699801 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-07 03:29:07.699812 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-07 03:29:07.699823 | orchestrator |
2026-04-07 03:29:07.699833 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-07 03:29:07.699844 | orchestrator | Tuesday 07 April 2026 03:29:01 +0000 (0:00:07.105) 0:02:31.731 *********
2026-04-07 03:29:07.699855 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699865 | orchestrator |
2026-04-07 03:29:07.699875 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-07 03:29:07.699885 | orchestrator | Tuesday 07 April 2026 03:29:05 +0000 (0:00:03.550) 0:02:35.282 *********
2026-04-07 03:29:07.699896 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:29:07.699906 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:29:07.699916 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:29:07.699926 | orchestrator |
2026-04-07 03:29:07.699937 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-07 03:29:07.699946 | orchestrator | Tuesday 07 April 2026 03:29:05 +0000 (0:00:00.560) 0:02:35.842 *********
2026-04-07 03:29:07.699976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 03:29:07.700007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 03:29:07.700038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-07 03:29:07.700050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 03:29:07.700063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 03:29:07.700077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-07 03:29:07.700088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 03:29:07.700101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 03:29:07.700124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-07 03:29:09.310833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 03:29:09.310946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 03:29:09.310964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-07 03:29:09.311001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:29:09.311020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:29:09.311062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-07 03:29:09.311079 | orchestrator |
2026-04-07 03:29:09.311097 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-07 03:29:09.311113 | orchestrator | Tuesday 07 April 2026 03:29:08 +0000 (0:00:02.585) 0:02:38.427 *********
2026-04-07 03:29:09.311128 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:29:09.311144 | orchestrator |
2026-04-07 03:29:09.311159 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-07 03:29:09.311174 | orchestrator | Tuesday 07 April 2026 03:29:08 +0000 (0:00:00.156) 0:02:38.584 *********
2026-04-07 03:29:09.311188 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:29:09.311224 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:29:09.311239 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:29:09.311254 | orchestrator |
2026-04-07 03:29:09.311269 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-07 03:29:09.311284 | orchestrator | Tuesday 07 April 2026 03:29:08 +0000 (0:00:00.376) 0:02:38.960 *********
2026-04-07 03:29:09.311300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:09.311318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:09.311340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:09.311357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:09.311386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:09.311402 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:29:09.311427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:14.490649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:14.490825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:14.490871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:14.490925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:14.490961 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:29:14.490976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:14.490990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:14.491022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:14.491035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:14.491053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:14.491072 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:29:14.491084 | orchestrator | 2026-04-07 03:29:14.491096 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 03:29:14.491109 | orchestrator | Tuesday 07 April 2026 03:29:09 +0000 (0:00:00.708) 0:02:39.669 ********* 2026-04-07 03:29:14.491121 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:29:14.491132 | orchestrator | 2026-04-07 03:29:14.491146 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-07 03:29:14.491158 | orchestrator | Tuesday 07 April 2026 03:29:10 +0000 (0:00:00.857) 0:02:40.527 ********* 2026-04-07 03:29:14.491203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:14.491219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:14.491243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:16.104456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:16.104596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:16.104614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:16.104628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:16.104813 | orchestrator | 2026-04-07 03:29:16.104827 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-07 03:29:16.104840 | orchestrator | Tuesday 07 April 2026 03:29:15 +0000 (0:00:05.239) 0:02:45.767 ********* 2026-04-07 03:29:16.104863 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:16.231504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:16.231606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:16.231618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:16.231628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:16.231637 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:29:16.231648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:16.231658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:16.231699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:16.231713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:16.231721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:16.231729 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:29:16.231789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:16.231797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:16.231806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:16.231828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-04-07 03:29:17.099108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:17.099211 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:29:17.099232 | orchestrator | 2026-04-07 03:29:17.099248 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-07 03:29:17.099262 | orchestrator | Tuesday 07 April 2026 03:29:16 +0000 (0:00:00.727) 0:02:46.494 ********* 2026-04-07 03:29:17.099276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-04-07 03:29:17.099293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:17.099307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:17.099323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:17.099382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:17.099392 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:29:17.099409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:17.099417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:17.099425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:17.099434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:17.099448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:17.099456 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:29:17.099479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 03:29:21.988395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 03:29:21.988483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 03:29:21.988495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 03:29:21.988504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 03:29:21.988530 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:29:21.988539 | orchestrator | 2026-04-07 03:29:21.988547 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-07 
03:29:21.988554 | orchestrator | Tuesday 07 April 2026 03:29:17 +0000 (0:00:01.390) 0:02:47.885 ********* 2026-04-07 03:29:21.988562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:21.988600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:21.988614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:21.988624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:21.988631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:21.988644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:21.988651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:21.988666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-04-07 03:29:39.027368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:39.027376 | orchestrator | 2026-04-07 03:29:39.027384 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-07 03:29:39.027392 | orchestrator | Tuesday 07 April 2026 03:29:23 +0000 (0:00:05.456) 0:02:53.342 ********* 2026-04-07 03:29:39.027398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 03:29:39.027406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 03:29:39.027412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-07 03:29:39.027418 | orchestrator | 2026-04-07 03:29:39.027425 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-07 03:29:39.027431 | orchestrator | Tuesday 07 April 2026 03:29:24 +0000 (0:00:01.779) 0:02:55.121 ********* 2026-04-07 03:29:39.027439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:39.027452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:39.027459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:29:39.027476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:54.904397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:54.904508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:29:54.904520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:29:54.904627 | orchestrator | 2026-04-07 03:29:54.904635 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-07 03:29:54.904642 | orchestrator | Tuesday 07 April 2026 03:29:42 +0000 (0:00:17.766) 0:03:12.887 ********* 2026-04-07 03:29:54.904649 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:29:54.904656 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:29:54.904666 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:29:54.904675 | orchestrator | 2026-04-07 03:29:54.904684 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-07 03:29:54.904735 | orchestrator | Tuesday 07 April 2026 03:29:44 +0000 (0:00:01.822) 0:03:14.710 ********* 2026-04-07 03:29:54.904744 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 03:29:54.904753 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-07 03:29:54.904761 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 03:29:54.904769 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 03:29:54.904778 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 03:29:54.904786 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 03:29:54.904794 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 03:29:54.904803 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 03:29:54.904811 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 03:29:54.904820 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 03:29:54.904828 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 03:29:54.904836 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 03:29:54.904844 | orchestrator | 2026-04-07 03:29:54.904854 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-07 03:29:54.904868 | orchestrator | Tuesday 07 April 2026 03:29:49 +0000 (0:00:05.274) 0:03:19.984 ********* 2026-04-07 03:29:54.904876 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-07 03:29:54.904884 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 03:29:54.904899 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 03:30:03.902275 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902408 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902433 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902452 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902483 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902506 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902526 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902538 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902549 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902561 | orchestrator | 2026-04-07 03:30:03.902574 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-07 03:30:03.902587 | orchestrator | Tuesday 07 April 2026 03:29:54 +0000 (0:00:05.179) 0:03:25.164 ********* 2026-04-07 03:30:03.902598 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-07 03:30:03.902609 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-07 03:30:03.902620 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-07 03:30:03.902631 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902642 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902653 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-07 03:30:03.902664 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902675 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902835 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-07 03:30:03.902857 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902875 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902888 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-07 03:30:03.902901 | orchestrator | 2026-04-07 03:30:03.902915 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-07 03:30:03.902928 | orchestrator | Tuesday 07 April 2026 03:30:00 +0000 (0:00:05.567) 0:03:30.732 ********* 2026-04-07 03:30:03.902946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:30:03.902965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:30:03.903050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 03:30:03.903068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:30:03.903082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-07 03:30:03.903094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-07 03:30:03.903106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:30:03.903119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:30:03.903145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-07 03:30:03.903164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-07 03:31:33.033402 | orchestrator | 2026-04-07 
03:31:33.033411 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-07 03:31:33.033419 | orchestrator | Tuesday 07 April 2026 03:30:04 +0000 (0:00:04.289) 0:03:35.021 ********* 2026-04-07 03:31:33.033426 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:31:33.033434 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:31:33.033440 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:31:33.033447 | orchestrator | 2026-04-07 03:31:33.033472 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-07 03:31:33.033488 | orchestrator | Tuesday 07 April 2026 03:30:05 +0000 (0:00:00.646) 0:03:35.668 ********* 2026-04-07 03:31:33.033504 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033514 | orchestrator | 2026-04-07 03:31:33.033524 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-07 03:31:33.033535 | orchestrator | Tuesday 07 April 2026 03:30:07 +0000 (0:00:02.474) 0:03:38.142 ********* 2026-04-07 03:31:33.033545 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033555 | orchestrator | 2026-04-07 03:31:33.033566 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-07 03:31:33.033575 | orchestrator | Tuesday 07 April 2026 03:30:10 +0000 (0:00:02.336) 0:03:40.478 ********* 2026-04-07 03:31:33.033636 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033646 | orchestrator | 2026-04-07 03:31:33.033656 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-07 03:31:33.033669 | orchestrator | Tuesday 07 April 2026 03:30:12 +0000 (0:00:02.437) 0:03:42.916 ********* 2026-04-07 03:31:33.033701 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033713 | orchestrator | 2026-04-07 03:31:33.033724 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-07 03:31:33.033737 | orchestrator | Tuesday 07 April 2026 03:30:15 +0000 (0:00:02.520) 0:03:45.436 ********* 2026-04-07 03:31:33.033748 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033759 | orchestrator | 2026-04-07 03:31:33.033771 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 03:31:33.033784 | orchestrator | Tuesday 07 April 2026 03:30:39 +0000 (0:00:24.013) 0:04:09.449 ********* 2026-04-07 03:31:33.033795 | orchestrator | 2026-04-07 03:31:33.033807 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 03:31:33.033816 | orchestrator | Tuesday 07 April 2026 03:30:39 +0000 (0:00:00.071) 0:04:09.521 ********* 2026-04-07 03:31:33.033824 | orchestrator | 2026-04-07 03:31:33.033832 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-07 03:31:33.033839 | orchestrator | Tuesday 07 April 2026 03:30:39 +0000 (0:00:00.068) 0:04:09.590 ********* 2026-04-07 03:31:33.033847 | orchestrator | 2026-04-07 03:31:33.033855 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-07 03:31:33.033862 | orchestrator | Tuesday 07 April 2026 03:30:39 +0000 (0:00:00.068) 0:04:09.658 ********* 2026-04-07 03:31:33.033870 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033878 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:31:33.033885 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:31:33.033893 | orchestrator | 2026-04-07 03:31:33.033900 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-07 03:31:33.033908 | orchestrator | Tuesday 07 April 2026 03:30:56 +0000 (0:00:16.967) 0:04:26.626 ********* 2026-04-07 03:31:33.033926 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.033935 | orchestrator | changed: 
[testbed-node-2] 2026-04-07 03:31:33.033942 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:31:33.033951 | orchestrator | 2026-04-07 03:31:33.033961 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-07 03:31:33.033972 | orchestrator | Tuesday 07 April 2026 03:31:07 +0000 (0:00:11.266) 0:04:37.892 ********* 2026-04-07 03:31:33.033983 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:31:33.033995 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.034007 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:31:33.034074 | orchestrator | 2026-04-07 03:31:33.034084 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-07 03:31:33.034092 | orchestrator | Tuesday 07 April 2026 03:31:18 +0000 (0:00:10.472) 0:04:48.365 ********* 2026-04-07 03:31:33.034100 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.034108 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:31:33.034116 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:31:33.034124 | orchestrator | 2026-04-07 03:31:33.034131 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-07 03:31:33.034138 | orchestrator | Tuesday 07 April 2026 03:31:23 +0000 (0:00:05.826) 0:04:54.192 ********* 2026-04-07 03:31:33.034145 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:31:33.034151 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:31:33.034158 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:31:33.034165 | orchestrator | 2026-04-07 03:31:33.034171 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:31:33.034180 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:31:33.034188 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-07 03:31:33.034195 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 03:31:33.034202 | orchestrator | 2026-04-07 03:31:33.034208 | orchestrator | 2026-04-07 03:31:33.034215 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:31:33.034222 | orchestrator | Tuesday 07 April 2026 03:31:33 +0000 (0:00:09.081) 0:05:03.273 ********* 2026-04-07 03:31:33.034229 | orchestrator | =============================================================================== 2026-04-07 03:31:33.034236 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.01s 2026-04-07 03:31:33.034242 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.77s 2026-04-07 03:31:33.034250 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.50s 2026-04-07 03:31:33.034261 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.97s 2026-04-07 03:31:33.034272 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.59s 2026-04-07 03:31:33.034298 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.27s 2026-04-07 03:31:33.034311 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.47s 2026-04-07 03:31:33.034324 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.34s 2026-04-07 03:31:33.034336 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.08s 2026-04-07 03:31:33.034346 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.78s 2026-04-07 03:31:33.034357 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.56s 2026-04-07 03:31:33.034370 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 7.11s 2026-04-07 03:31:33.034382 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.87s 2026-04-07 03:31:33.034404 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.83s 2026-04-07 03:31:33.034428 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.79s 2026-04-07 03:31:33.449211 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.79s 2026-04-07 03:31:33.449311 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.57s 2026-04-07 03:31:33.449326 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.46s 2026-04-07 03:31:33.449336 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.27s 2026-04-07 03:31:33.449345 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.24s 2026-04-07 03:31:36.011020 | orchestrator | 2026-04-07 03:31:36 | INFO  | Task 3b770cbd-6f7e-475c-b384-5e219c789aa8 (ceilometer) was prepared for execution. 2026-04-07 03:31:36.011134 | orchestrator | 2026-04-07 03:31:36 | INFO  | It takes a moment until task 3b770cbd-6f7e-475c-b384-5e219c789aa8 (ceilometer) has been started and output is visible here. 
2026-04-07 03:32:01.245898 | orchestrator | 2026-04-07 03:32:01.245983 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:32:01.245995 | orchestrator | 2026-04-07 03:32:01.246002 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:32:01.246009 | orchestrator | Tuesday 07 April 2026 03:31:40 +0000 (0:00:00.298) 0:00:00.298 ********* 2026-04-07 03:32:01.246052 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:32:01.246062 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:32:01.246069 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:32:01.246074 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:32:01.246077 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:32:01.246082 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:32:01.246086 | orchestrator | 2026-04-07 03:32:01.246090 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:32:01.246094 | orchestrator | Tuesday 07 April 2026 03:31:41 +0000 (0:00:00.769) 0:00:01.068 ********* 2026-04-07 03:32:01.246098 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246102 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246106 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246110 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246114 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246117 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-07 03:32:01.246121 | orchestrator | 2026-04-07 03:32:01.246125 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-07 03:32:01.246129 | orchestrator | 2026-04-07 03:32:01.246132 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-07 03:32:01.246136 | orchestrator | Tuesday 07 April 2026 03:31:41 +0000 (0:00:00.659) 0:00:01.728 ********* 2026-04-07 03:32:01.246141 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:32:01.246146 | orchestrator | 2026-04-07 03:32:01.246150 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-07 03:32:01.246154 | orchestrator | Tuesday 07 April 2026 03:31:43 +0000 (0:00:01.267) 0:00:02.995 ********* 2026-04-07 03:32:01.246158 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:01.246161 | orchestrator | 2026-04-07 03:32:01.246165 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-07 03:32:01.246169 | orchestrator | Tuesday 07 April 2026 03:31:43 +0000 (0:00:00.154) 0:00:03.150 ********* 2026-04-07 03:32:01.246173 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:01.246176 | orchestrator | 2026-04-07 03:32:01.246180 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-07 03:32:01.246201 | orchestrator | Tuesday 07 April 2026 03:31:43 +0000 (0:00:00.137) 0:00:03.287 ********* 2026-04-07 03:32:01.246205 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:32:01.246209 | orchestrator | 2026-04-07 03:32:01.246212 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-07 03:32:01.246216 | orchestrator | Tuesday 07 April 2026 03:31:47 +0000 (0:00:04.337) 0:00:07.625 ********* 2026-04-07 03:32:01.246220 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 03:32:01.246224 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-07 03:32:01.246227 | orchestrator | 
2026-04-07 03:32:01.246231 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-07 03:32:01.246235 | orchestrator | Tuesday 07 April 2026 03:31:51 +0000 (0:00:04.006) 0:00:11.631 ********* 2026-04-07 03:32:01.246238 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 03:32:01.246242 | orchestrator | 2026-04-07 03:32:01.246246 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-07 03:32:01.246259 | orchestrator | Tuesday 07 April 2026 03:31:55 +0000 (0:00:03.461) 0:00:15.093 ********* 2026-04-07 03:32:01.246263 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-07 03:32:01.246266 | orchestrator | 2026-04-07 03:32:01.246270 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-07 03:32:01.246274 | orchestrator | Tuesday 07 April 2026 03:31:59 +0000 (0:00:04.264) 0:00:19.358 ********* 2026-04-07 03:32:01.246277 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:01.246281 | orchestrator | 2026-04-07 03:32:01.246285 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-07 03:32:01.246289 | orchestrator | Tuesday 07 April 2026 03:31:59 +0000 (0:00:00.137) 0:00:19.495 ********* 2026-04-07 03:32:01.246295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:01.246314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:01.246319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:01.246324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:01.246333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:01.246338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:01.246342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:01.246350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:06.378966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:06.379047 | orchestrator | 2026-04-07 03:32:06.379056 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-07 03:32:06.379079 | orchestrator | Tuesday 07 April 2026 03:32:01 +0000 (0:00:01.536) 0:00:21.032 ********* 2026-04-07 03:32:06.379085 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-04-07 03:32:06.379091 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:32:06.379095 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 03:32:06.379100 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 03:32:06.379104 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 03:32:06.379109 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 03:32:06.379113 | orchestrator | 2026-04-07 03:32:06.379118 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-07 03:32:06.379124 | orchestrator | Tuesday 07 April 2026 03:32:02 +0000 (0:00:01.756) 0:00:22.789 ********* 2026-04-07 03:32:06.379128 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:32:06.379134 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:32:06.379138 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:32:06.379143 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:32:06.379147 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:32:06.379152 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:32:06.379156 | orchestrator | 2026-04-07 03:32:06.379161 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-07 03:32:06.379165 | orchestrator | Tuesday 07 April 2026 03:32:03 +0000 (0:00:00.671) 0:00:23.460 ********* 2026-04-07 03:32:06.379170 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:06.379175 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:06.379179 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:06.379184 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:06.379188 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:06.379193 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:06.379198 | orchestrator | 2026-04-07 03:32:06.379206 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-04-07 03:32:06.379213 | orchestrator | Tuesday 07 April 2026 03:32:04 +0000 (0:00:00.865) 0:00:24.326 ********* 2026-04-07 03:32:06.379221 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:32:06.379228 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:32:06.379235 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:32:06.379243 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:32:06.379250 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:32:06.379290 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:32:06.379298 | orchestrator | 2026-04-07 03:32:06.379305 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-07 03:32:06.379312 | orchestrator | Tuesday 07 April 2026 03:32:05 +0000 (0:00:00.671) 0:00:24.998 ********* 2026-04-07 03:32:06.379324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:06.379334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:06.379348 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:06.379371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:06.379378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:06.379386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:06.379394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:06.379406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:06.379414 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:06.379421 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:06.379428 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:06.379436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:06.379448 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:06.379461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708091 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:11.708190 | orchestrator | 2026-04-07 03:32:11.708206 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-07 03:32:11.708218 | orchestrator | Tuesday 07 April 2026 03:32:06 +0000 (0:00:01.165) 0:00:26.163 ********* 2026-04-07 03:32:11.708230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:11.708271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': 
{'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:11.708314 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:11.708324 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:11.708334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 
03:32:11.708370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708380 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:11.708390 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:11.708400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708410 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:11.708424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:11.708434 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:11.708443 | orchestrator | 2026-04-07 03:32:11.708453 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-07 03:32:11.708473 | orchestrator | Tuesday 07 April 2026 03:32:07 +0000 (0:00:00.939) 0:00:27.103 ********* 2026-04-07 03:32:11.708482 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:32:11.708491 | orchestrator | 2026-04-07 03:32:11.708499 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-07 03:32:11.708508 | orchestrator | Tuesday 07 April 2026 03:32:08 +0000 (0:00:00.756) 0:00:27.859 ********* 2026-04-07 03:32:11.708518 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:32:11.708528 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:32:11.708560 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:32:11.708570 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:32:11.708579 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:32:11.708588 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:32:11.708597 | orchestrator | 2026-04-07 03:32:11.708606 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-07 03:32:11.708616 | orchestrator | Tuesday 07 April 2026 03:32:08 +0000 (0:00:00.875) 
0:00:28.735 ********* 2026-04-07 03:32:11.708625 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:32:11.708635 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:32:11.708645 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:32:11.708656 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:32:11.708666 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:32:11.708676 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:32:11.708686 | orchestrator | 2026-04-07 03:32:11.708696 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-07 03:32:11.708707 | orchestrator | Tuesday 07 April 2026 03:32:09 +0000 (0:00:01.029) 0:00:29.764 ********* 2026-04-07 03:32:11.708717 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:11.708728 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:11.708738 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:11.708748 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:11.708758 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:11.708768 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:11.708778 | orchestrator | 2026-04-07 03:32:11.708788 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-07 03:32:11.708799 | orchestrator | Tuesday 07 April 2026 03:32:11 +0000 (0:00:01.065) 0:00:30.830 ********* 2026-04-07 03:32:11.708809 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:11.708819 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:11.708831 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:11.708841 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:11.708851 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:11.708861 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:11.708870 | orchestrator | 2026-04-07 03:32:17.193645 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-07 03:32:17.193747 | orchestrator | Tuesday 07 April 2026 03:32:11 +0000 (0:00:00.666) 0:00:31.497 ********* 2026-04-07 03:32:17.193761 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:32:17.193773 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 03:32:17.193783 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 03:32:17.193793 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 03:32:17.193803 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 03:32:17.193813 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 03:32:17.193822 | orchestrator | 2026-04-07 03:32:17.193833 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-07 03:32:17.193843 | orchestrator | Tuesday 07 April 2026 03:32:13 +0000 (0:00:01.652) 0:00:33.150 ********* 2026-04-07 03:32:17.193856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:17.193894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:17.193913 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:17.193952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:17.193979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:17.193995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:17.194103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:17.194126 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:17.194145 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:17.194164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:17.194232 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:17.194246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:17.194259 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:17.194278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:17.194291 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:17.194302 | orchestrator |
2026-04-07 03:32:17.194313 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-04-07 03:32:17.194325 | orchestrator | Tuesday 07 April 2026 03:32:14 +0000 (0:00:00.837) 0:00:33.987 *********
2026-04-07 03:32:17.194336 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:17.194347 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:17.194359 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:17.194371 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:17.194382 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:17.194393 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:17.194404 | orchestrator |
2026-04-07 03:32:17.194416 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-04-07 03:32:17.194426 | orchestrator | Tuesday 07 April 2026 03:32:15 +0000 (0:00:00.906) 0:00:34.894 *********
2026-04-07 03:32:17.194438 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 03:32:17.194450 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-07 03:32:17.194461 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 03:32:17.194472 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-07 03:32:17.194483 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 03:32:17.194494 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 03:32:17.194505 | orchestrator |
2026-04-07 03:32:17.194515 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-04-07 03:32:17.194524 | orchestrator | Tuesday 07 April 2026 03:32:16 +0000 (0:00:01.585) 0:00:36.480 *********
2026-04-07 03:32:17.194622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:23.530257 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:23.530271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:23.530320 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:23.530334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:23.530362 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:23.530375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530412 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:23.530477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530493 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:23.530507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530520 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:23.530561 | orchestrator |
2026-04-07 03:32:23.530577 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-04-07 03:32:23.530591 | orchestrator | Tuesday 07 April 2026 03:32:17 +0000 (0:00:01.278) 0:00:37.758 *********
2026-04-07 03:32:23.530603 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:23.530612 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:23.530620 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:23.530628 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:23.530636 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:23.530650 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:23.530660 | orchestrator |
2026-04-07 03:32:23.530669 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-04-07 03:32:23.530678 | orchestrator | Tuesday 07 April 2026 03:32:18 +0000 (0:00:00.147) 0:00:38.703 *********
2026-04-07 03:32:23.530687 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:23.530696 | orchestrator |
2026-04-07 03:32:23.530705 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-04-07 03:32:23.530715 | orchestrator | Tuesday 07 April 2026 03:32:19 +0000 (0:00:00.147) 0:00:38.850 *********
2026-04-07 03:32:23.530723 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:23.530733 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:23.530742 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:23.530751 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:23.530760 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:23.530769 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:23.530778 | orchestrator |
2026-04-07 03:32:23.530787 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-07 03:32:23.530796 | orchestrator | Tuesday 07 April 2026 03:32:19 +0000 (0:00:00.680) 0:00:39.531 *********
2026-04-07 03:32:23.530815 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 03:32:23.530826 | orchestrator |
2026-04-07 03:32:23.530834 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-04-07 03:32:23.530844 | orchestrator | Tuesday 07 April 2026 03:32:21 +0000 (0:00:01.398) 0:00:40.929 *********
2026-04-07 03:32:23.530853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:23.530870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.073776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.073897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.073935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.073946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.073976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:24.073985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:24.074011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:24.074073 | orchestrator |
2026-04-07 03:32:24.074084 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-04-07 03:32:24.074094 | orchestrator | Tuesday 07 April 2026 03:32:23 +0000 (0:00:02.388) 0:00:43.318 *********
2026-04-07 03:32:24.074103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.074117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:24.074133 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:24.074147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.074162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:24.074184 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:24.074200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:24.074224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:26.103685 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:26.103778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103796 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:26.103823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103850 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:26.103856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103862 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:26.103868 | orchestrator |
2026-04-07 03:32:26.103874 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-04-07 03:32:26.103881 | orchestrator | Tuesday 07 April 2026 03:32:24 +0000 (0:00:00.897) 0:00:44.216 *********
2026-04-07 03:32:26.103887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:26.103915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:26.103935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:26.103946 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:32:26.103952 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:32:26.103957 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:32:26.103963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103969 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:32:26.103974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:26.103980 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:32:26.103992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014229 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:32:34.014319 | orchestrator |
2026-04-07 03:32:34.014348 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-04-07 03:32:34.014356 | orchestrator | Tuesday 07 April 2026 03:32:26 +0000 (0:00:01.671) 0:00:45.887 *********
2026-04-07 03:32:34.014376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-07 03:32:34.014449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:34.014457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:34.014463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-07 03:32:34.014470 | orchestrator |
2026-04-07 03:32:34.014476 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-04-07 03:32:34.014482 | orchestrator | Tuesday 07 April 2026 03:32:28 +0000 (0:00:02.725) 0:00:48.613
********* 2026-04-07 03:32:34.014489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:34.014496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:34.014507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.734375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.734470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.734482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.734491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:43.734501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:43.734561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:43.734595 | orchestrator | 2026-04-07 03:32:43.734609 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-07 03:32:43.734641 | orchestrator | Tuesday 07 April 2026 03:32:34 +0000 (0:00:05.188) 0:00:53.802 ********* 2026-04-07 03:32:43.734656 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:32:43.734672 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 03:32:43.734683 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 03:32:43.734696 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 03:32:43.734710 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 03:32:43.734723 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-07 03:32:43.734737 | orchestrator | 2026-04-07 03:32:43.734747 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-07 03:32:43.734754 | orchestrator | Tuesday 07 April 2026 03:32:35 +0000 (0:00:01.664) 0:00:55.466 ********* 2026-04-07 03:32:43.734762 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:43.734769 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:43.734776 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:43.734783 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:43.734797 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:43.734807 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:43.734820 | orchestrator | 2026-04-07 03:32:43.734831 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-07 
03:32:43.734844 | orchestrator | Tuesday 07 April 2026 03:32:36 +0000 (0:00:00.647) 0:00:56.114 ********* 2026-04-07 03:32:43.734856 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:43.734867 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:43.734879 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:43.734892 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:32:43.734906 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:32:43.734918 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:32:43.734931 | orchestrator | 2026-04-07 03:32:43.734940 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-07 03:32:43.734949 | orchestrator | Tuesday 07 April 2026 03:32:38 +0000 (0:00:01.703) 0:00:57.818 ********* 2026-04-07 03:32:43.734957 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:43.734966 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:43.734974 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:43.734986 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:32:43.734999 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:32:43.735011 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:32:43.735024 | orchestrator | 2026-04-07 03:32:43.735038 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-07 03:32:43.735051 | orchestrator | Tuesday 07 April 2026 03:32:39 +0000 (0:00:01.424) 0:00:59.243 ********* 2026-04-07 03:32:43.735064 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:32:43.735073 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-07 03:32:43.735085 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-07 03:32:43.735097 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-07 03:32:43.735112 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-07 03:32:43.735124 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-04-07 03:32:43.735136 | orchestrator | 2026-04-07 03:32:43.735150 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-07 03:32:43.735162 | orchestrator | Tuesday 07 April 2026 03:32:41 +0000 (0:00:01.737) 0:01:00.981 ********* 2026-04-07 03:32:43.735187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.735197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.735205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:43.735228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:44.647263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:44.647339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:32:44.647364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:44.647370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:44.647376 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:32:44.647381 | orchestrator | 2026-04-07 03:32:44.647387 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-07 03:32:44.647393 | orchestrator | Tuesday 07 April 2026 03:32:43 +0000 (0:00:02.541) 0:01:03.522 ********* 2026-04-07 03:32:44.647398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:44.647424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:44.647431 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:44.647436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:44.647445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:44.647450 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:44.647455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:44.647469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:44.647474 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:44.647480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:44.647490 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:44.647578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725392 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:48.725555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725575 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:48.725582 | orchestrator | 2026-04-07 03:32:48.725590 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-07 03:32:48.725631 | orchestrator | Tuesday 07 April 2026 03:32:44 +0000 (0:00:00.915) 0:01:04.437 ********* 2026-04-07 03:32:48.725639 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:32:48.725646 | orchestrator | skipping: 
[testbed-node-1] 2026-04-07 03:32:48.725653 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:48.725660 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:48.725666 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:48.725673 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:48.725679 | orchestrator | 2026-04-07 03:32:48.725686 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-07 03:32:48.725693 | orchestrator | Tuesday 07 April 2026 03:32:45 +0000 (0:00:00.939) 0:01:05.377 ********* 2026-04-07 03:32:48.725703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:48.725720 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
03:32:48.725727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:48.725773 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:32:48.725797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 03:32:48.725809 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:32:48.725815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725821 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:32:48.725828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725833 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:32:48.725840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-07 03:32:48.725851 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:32:48.725857 | orchestrator | 2026-04-07 03:32:48.725867 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-07 03:32:48.725873 | orchestrator | Tuesday 07 April 2026 03:32:46 +0000 (0:00:01.030) 0:01:06.408 ********* 2026-04-07 03:32:48.725886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 
'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 
5672'], 'timeout': '30'}}}) 2026-04-07 03:33:20.017927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:33:20.017958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:33:20.017969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-07 03:33:20.017981 | orchestrator | 2026-04-07 
03:33:20.017993 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-07 03:33:20.018006 | orchestrator | Tuesday 07 April 2026 03:32:48 +0000 (0:00:02.101) 0:01:08.509 ********* 2026-04-07 03:33:20.018079 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:33:20.018094 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:33:20.018104 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:33:20.018114 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:33:20.018123 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:33:20.018133 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:33:20.018143 | orchestrator | 2026-04-07 03:33:20.018153 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-07 03:33:20.018163 | orchestrator | Tuesday 07 April 2026 03:32:49 +0000 (0:00:00.696) 0:01:09.206 ********* 2026-04-07 03:33:20.018173 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:33:20.018182 | orchestrator | 2026-04-07 03:33:20.018194 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-07 03:33:20.018209 | orchestrator | Tuesday 07 April 2026 03:32:54 +0000 (0:00:05.195) 0:01:14.402 ********* 2026-04-07 03:33:20.018221 | orchestrator | 2026-04-07 03:33:20.018238 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-07 03:33:20.018251 | orchestrator | Tuesday 07 April 2026 03:32:54 +0000 (0:00:00.077) 0:01:14.479 ********* 2026-04-07 03:33:20.018263 | orchestrator | 2026-04-07 03:33:20.018302 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-07 03:33:20.018335 | orchestrator | Tuesday 07 April 2026 03:32:54 +0000 (0:00:00.073) 0:01:14.553 ********* 2026-04-07 03:33:20.018362 | orchestrator | 2026-04-07 03:33:20.018428 | orchestrator | TASK [ceilometer : Flush handlers] 
********************************************* 2026-04-07 03:33:20.018457 | orchestrator | Tuesday 07 April 2026 03:32:55 +0000 (0:00:00.272) 0:01:14.825 ********* 2026-04-07 03:33:20.018512 | orchestrator | 2026-04-07 03:33:20.018525 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-07 03:33:20.018535 | orchestrator | Tuesday 07 April 2026 03:32:55 +0000 (0:00:00.099) 0:01:14.925 ********* 2026-04-07 03:33:20.018545 | orchestrator | 2026-04-07 03:33:20.018557 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-07 03:33:20.018580 | orchestrator | Tuesday 07 April 2026 03:32:55 +0000 (0:00:00.071) 0:01:14.996 ********* 2026-04-07 03:33:20.018601 | orchestrator | 2026-04-07 03:33:20.018615 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-07 03:33:20.018625 | orchestrator | Tuesday 07 April 2026 03:32:55 +0000 (0:00:00.094) 0:01:15.091 ********* 2026-04-07 03:33:20.018636 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:33:20.018646 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:33:20.018656 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:33:20.018665 | orchestrator | 2026-04-07 03:33:20.018674 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-07 03:33:20.018685 | orchestrator | Tuesday 07 April 2026 03:33:05 +0000 (0:00:10.332) 0:01:25.423 ********* 2026-04-07 03:33:20.018694 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:33:20.018704 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:33:20.018727 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:33:20.018737 | orchestrator | 2026-04-07 03:33:20.018747 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-07 03:33:20.018756 | orchestrator | Tuesday 07 April 2026 03:33:13 +0000 (0:00:07.750) 
0:01:33.174 ********* 2026-04-07 03:33:20.018764 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:33:20.018773 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:33:20.018782 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:33:20.018790 | orchestrator | 2026-04-07 03:33:20.018798 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:33:20.018814 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-07 03:33:20.018834 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 03:33:20.018860 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 03:33:20.564624 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-07 03:33:20.564713 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-07 03:33:20.564728 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-07 03:33:20.564741 | orchestrator | 2026-04-07 03:33:20.564753 | orchestrator | 2026-04-07 03:33:20.564764 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:33:20.564777 | orchestrator | Tuesday 07 April 2026 03:33:19 +0000 (0:00:06.614) 0:01:39.789 ********* 2026-04-07 03:33:20.564787 | orchestrator | =============================================================================== 2026-04-07 03:33:20.564798 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.33s 2026-04-07 03:33:20.564833 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 7.75s 2026-04-07 03:33:20.564841 | orchestrator | ceilometer : Restart ceilometer-compute 
container ----------------------- 6.61s 2026-04-07 03:33:20.564848 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 5.20s 2026-04-07 03:33:20.564855 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.19s 2026-04-07 03:33:20.564862 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 4.34s 2026-04-07 03:33:20.564869 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.26s 2026-04-07 03:33:20.564875 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.01s 2026-04-07 03:33:20.564882 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.46s 2026-04-07 03:33:20.564888 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.73s 2026-04-07 03:33:20.564895 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.54s 2026-04-07 03:33:20.564902 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.39s 2026-04-07 03:33:20.564908 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 2.10s 2026-04-07 03:33:20.564915 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.76s 2026-04-07 03:33:20.564922 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.74s 2026-04-07 03:33:20.564928 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.70s 2026-04-07 03:33:20.564936 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.67s 2026-04-07 03:33:20.564943 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.66s 2026-04-07 03:33:20.564950 | orchestrator | ceilometer : Check if custom polling.yaml exists 
------------------------ 1.65s 2026-04-07 03:33:20.564956 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.59s 2026-04-07 03:33:23.300979 | orchestrator | 2026-04-07 03:33:23 | INFO  | Task c8595065-412d-46ff-9061-b787caaa67e8 (aodh) was prepared for execution. 2026-04-07 03:33:23.301078 | orchestrator | 2026-04-07 03:33:23 | INFO  | It takes a moment until task c8595065-412d-46ff-9061-b787caaa67e8 (aodh) has been started and output is visible here. 2026-04-07 03:33:57.304224 | orchestrator | 2026-04-07 03:33:57.304338 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:33:57.304353 | orchestrator | 2026-04-07 03:33:57.304364 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:33:57.304374 | orchestrator | Tuesday 07 April 2026 03:33:28 +0000 (0:00:00.282) 0:00:00.282 ********* 2026-04-07 03:33:57.304384 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:33:57.304395 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:33:57.304405 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:33:57.304415 | orchestrator | 2026-04-07 03:33:57.304425 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:33:57.304488 | orchestrator | Tuesday 07 April 2026 03:33:28 +0000 (0:00:00.367) 0:00:00.650 ********* 2026-04-07 03:33:57.304499 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-07 03:33:57.304523 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-07 03:33:57.304534 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-07 03:33:57.304544 | orchestrator | 2026-04-07 03:33:57.304554 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-07 03:33:57.304563 | orchestrator | 2026-04-07 03:33:57.304573 | orchestrator | TASK [aodh : include_tasks] 
**************************************************** 2026-04-07 03:33:57.304583 | orchestrator | Tuesday 07 April 2026 03:33:28 +0000 (0:00:00.494) 0:00:01.144 ********* 2026-04-07 03:33:57.304593 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:33:57.304604 | orchestrator | 2026-04-07 03:33:57.304614 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-07 03:33:57.304643 | orchestrator | Tuesday 07 April 2026 03:33:29 +0000 (0:00:00.591) 0:00:01.735 ********* 2026-04-07 03:33:57.304655 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-07 03:33:57.304672 | orchestrator | 2026-04-07 03:33:57.304689 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-07 03:33:57.304705 | orchestrator | Tuesday 07 April 2026 03:33:33 +0000 (0:00:03.660) 0:00:05.396 ********* 2026-04-07 03:33:57.304722 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-07 03:33:57.304738 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-07 03:33:57.304754 | orchestrator | 2026-04-07 03:33:57.304771 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-07 03:33:57.304783 | orchestrator | Tuesday 07 April 2026 03:33:39 +0000 (0:00:06.829) 0:00:12.226 ********* 2026-04-07 03:33:57.304792 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:33:57.304803 | orchestrator | 2026-04-07 03:33:57.304812 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-07 03:33:57.304822 | orchestrator | Tuesday 07 April 2026 03:33:43 +0000 (0:00:03.503) 0:00:15.729 ********* 2026-04-07 03:33:57.304835 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-04-07 03:33:57.304851 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-07 03:33:57.304873 | orchestrator | 2026-04-07 03:33:57.304893 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-07 03:33:57.304909 | orchestrator | Tuesday 07 April 2026 03:33:47 +0000 (0:00:04.049) 0:00:19.779 ********* 2026-04-07 03:33:57.304924 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-07 03:33:57.304940 | orchestrator | 2026-04-07 03:33:57.304954 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-07 03:33:57.304970 | orchestrator | Tuesday 07 April 2026 03:33:50 +0000 (0:00:03.394) 0:00:23.173 ********* 2026-04-07 03:33:57.304985 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-07 03:33:57.305000 | orchestrator | 2026-04-07 03:33:57.305014 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-07 03:33:57.305029 | orchestrator | Tuesday 07 April 2026 03:33:55 +0000 (0:00:04.092) 0:00:27.266 ********* 2026-04-07 03:33:57.305050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:33:57.305099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:33:57.305145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:33:57.305165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:33:57.305184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:33:57.305201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:33:57.305219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:57.305249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:58.700426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:58.700525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:58.700533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:58.700539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:33:58.700545 | orchestrator | 2026-04-07 03:33:58.700551 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-07 03:33:58.700557 | orchestrator | Tuesday 07 April 2026 03:33:57 +0000 (0:00:02.282) 0:00:29.549 ********* 2026-04-07 03:33:58.700562 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:33:58.700568 | orchestrator | 2026-04-07 
03:33:58.700573 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-07 03:33:58.700577 | orchestrator | Tuesday 07 April 2026 03:33:57 +0000 (0:00:00.151) 0:00:29.701 ********* 2026-04-07 03:33:58.700582 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:33:58.700587 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:33:58.700592 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:33:58.700596 | orchestrator | 2026-04-07 03:33:58.700601 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-07 03:33:58.700606 | orchestrator | Tuesday 07 April 2026 03:33:57 +0000 (0:00:00.544) 0:00:30.245 ********* 2026-04-07 03:33:58.700612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:33:58.700643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:33:58.700653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:33:58.700659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:33:58.700664 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:33:58.700669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:33:58.700674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:33:58.700679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:33:58.700695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:03.879807 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:34:03.879919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:03.879937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-07 03:34:03.879948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:03.879957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:03.879966 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:34:03.879976 | orchestrator | 2026-04-07 03:34:03.879986 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-07 03:34:03.880001 | orchestrator | Tuesday 07 April 2026 03:33:58 +0000 (0:00:00.708) 0:00:30.953 ********* 2026-04-07 03:34:03.880040 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:34:03.880058 | orchestrator | 2026-04-07 03:34:03.880074 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-07 03:34:03.880090 | orchestrator | Tuesday 
07 April 2026 03:33:59 +0000 (0:00:00.833) 0:00:31.786 ********* 2026-04-07 03:34:03.880106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:03.880140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:03.880151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:03.880160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:03.880169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-04-07 03:34:03.880186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:03.880195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:03.880216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:04.639724 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:04.639810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:04.639821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:04.639827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:04.639851 | orchestrator | 2026-04-07 03:34:04.639859 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-07 03:34:04.639866 | orchestrator | Tuesday 07 April 2026 03:34:03 +0000 (0:00:04.343) 0:00:36.129 ********* 2026-04-07 03:34:04.639874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:04.639892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:34:04.639911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:04.639918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:04.639924 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:34:04.639930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:04.639941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:34:04.639946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:04.639952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:04.639958 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:34:04.639972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:05.759968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-07 03:34:05.760094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760173 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:34:05.760197 | orchestrator | 2026-04-07 03:34:05.760211 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-07 03:34:05.760224 | orchestrator | Tuesday 07 April 2026 03:34:04 +0000 (0:00:00.765) 0:00:36.894 ********* 2026-04-07 03:34:05.760237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:05.760263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:34:05.760275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760318 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:34:05.760339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:05.760350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:34:05.760361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:05.760384 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:34:05.760408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-07 03:34:09.932633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 03:34:09.932755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 03:34:09.932768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 03:34:09.932779 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:34:09.932789 | orchestrator | 2026-04-07 03:34:09.932798 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-07 03:34:09.932808 | orchestrator | Tuesday 07 April 2026 03:34:05 +0000 (0:00:01.116) 0:00:38.011 ********* 2026-04-07 03:34:09.932817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:09.932841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:09.932865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:09.932881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:09.932992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076286 | orchestrator | 2026-04-07 03:34:19.076296 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-07 03:34:19.076305 | orchestrator | Tuesday 07 April 2026 03:34:09 +0000 (0:00:04.166) 0:00:42.178 ********* 2026-04-07 03:34:19.076315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:19.076338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:19.076346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:19.076388 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:19.076498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422331 | orchestrator | 2026-04-07 03:34:24.422348 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-07 03:34:24.422361 | orchestrator | Tuesday 07 April 2026 03:34:19 +0000 (0:00:09.152) 0:00:51.330 ********* 2026-04-07 03:34:24.422371 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:34:24.422383 | orchestrator | 
changed: [testbed-node-1] 2026-04-07 03:34:24.422392 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:34:24.422402 | orchestrator | 2026-04-07 03:34:24.422482 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-07 03:34:24.422500 | orchestrator | Tuesday 07 April 2026 03:34:20 +0000 (0:00:01.807) 0:00:53.138 ********* 2026-04-07 03:34:24.422518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:24.422555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:24.422587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-07 03:34:24.422617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:34:24.422718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:35:21.527445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-07 03:35:21.528226 | orchestrator | 2026-04-07 03:35:21.528259 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-07 03:35:21.528271 | orchestrator | Tuesday 07 April 2026 03:34:24 +0000 (0:00:03.534) 0:00:56.672 ********* 2026-04-07 03:35:21.528281 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:35:21.528291 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:35:21.528300 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:35:21.528309 | orchestrator | 2026-04-07 03:35:21.528318 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-07 03:35:21.528327 | orchestrator | Tuesday 07 April 2026 03:34:24 +0000 (0:00:00.316) 0:00:56.989 ********* 2026-04-07 03:35:21.528336 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528345 | orchestrator | 2026-04-07 03:35:21.528389 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-07 03:35:21.528398 | orchestrator | Tuesday 07 April 2026 03:34:27 +0000 (0:00:02.499) 0:00:59.488 ********* 2026-04-07 03:35:21.528407 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528440 | orchestrator | 2026-04-07 
03:35:21.528450 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-07 03:35:21.528459 | orchestrator | Tuesday 07 April 2026 03:34:29 +0000 (0:00:02.523) 0:01:02.011 ********* 2026-04-07 03:35:21.528468 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528476 | orchestrator | 2026-04-07 03:35:21.528485 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-07 03:35:21.528494 | orchestrator | Tuesday 07 April 2026 03:34:43 +0000 (0:00:13.692) 0:01:15.704 ********* 2026-04-07 03:35:21.528502 | orchestrator | 2026-04-07 03:35:21.528511 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-07 03:35:21.528519 | orchestrator | Tuesday 07 April 2026 03:34:43 +0000 (0:00:00.082) 0:01:15.786 ********* 2026-04-07 03:35:21.528528 | orchestrator | 2026-04-07 03:35:21.528537 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-07 03:35:21.528545 | orchestrator | Tuesday 07 April 2026 03:34:43 +0000 (0:00:00.072) 0:01:15.859 ********* 2026-04-07 03:35:21.528554 | orchestrator | 2026-04-07 03:35:21.528563 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-07 03:35:21.528571 | orchestrator | Tuesday 07 April 2026 03:34:43 +0000 (0:00:00.281) 0:01:16.141 ********* 2026-04-07 03:35:21.528581 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528602 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:35:21.528611 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:35:21.528620 | orchestrator | 2026-04-07 03:35:21.528629 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-07 03:35:21.528637 | orchestrator | Tuesday 07 April 2026 03:34:54 +0000 (0:00:10.634) 0:01:26.776 ********* 2026-04-07 03:35:21.528646 | orchestrator | changed: 
[testbed-node-0] 2026-04-07 03:35:21.528654 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:35:21.528663 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:35:21.528672 | orchestrator | 2026-04-07 03:35:21.528680 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-07 03:35:21.528689 | orchestrator | Tuesday 07 April 2026 03:35:04 +0000 (0:00:10.335) 0:01:37.111 ********* 2026-04-07 03:35:21.528697 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528706 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:35:21.528715 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:35:21.528723 | orchestrator | 2026-04-07 03:35:21.528732 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-07 03:35:21.528740 | orchestrator | Tuesday 07 April 2026 03:35:15 +0000 (0:00:10.446) 0:01:47.558 ********* 2026-04-07 03:35:21.528749 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:35:21.528758 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:35:21.528766 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:35:21.528775 | orchestrator | 2026-04-07 03:35:21.528783 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:35:21.528794 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 03:35:21.528804 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 03:35:21.528813 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 03:35:21.528822 | orchestrator | 2026-04-07 03:35:21.528830 | orchestrator | 2026-04-07 03:35:21.528839 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:35:21.528848 | orchestrator | Tuesday 07 April 2026 
03:35:21 +0000 (0:00:05.790) 0:01:53.348 ********* 2026-04-07 03:35:21.528857 | orchestrator | =============================================================================== 2026-04-07 03:35:21.528865 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.69s 2026-04-07 03:35:21.528874 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.63s 2026-04-07 03:35:21.528907 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.45s 2026-04-07 03:35:21.528917 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.34s 2026-04-07 03:35:21.528926 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.15s 2026-04-07 03:35:21.528935 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.83s 2026-04-07 03:35:21.528943 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.79s 2026-04-07 03:35:21.528952 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.34s 2026-04-07 03:35:21.528960 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.17s 2026-04-07 03:35:21.528969 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 4.09s 2026-04-07 03:35:21.528978 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.05s 2026-04-07 03:35:21.528986 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.66s 2026-04-07 03:35:21.528995 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.53s 2026-04-07 03:35:21.529003 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.50s 2026-04-07 03:35:21.529012 | orchestrator | service-ks-register : aodh | Creating roles 
----------------------------- 3.39s 2026-04-07 03:35:21.529020 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.52s 2026-04-07 03:35:21.529029 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.50s 2026-04-07 03:35:21.529038 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.28s 2026-04-07 03:35:21.529046 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.81s 2026-04-07 03:35:21.529055 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.12s 2026-04-07 03:35:24.204007 | orchestrator | 2026-04-07 03:35:24 | INFO  | Task 7be60074-46df-4dce-bd4c-b2af152cddea (kolla-ceph-rgw) was prepared for execution. 2026-04-07 03:35:24.204096 | orchestrator | 2026-04-07 03:35:24 | INFO  | It takes a moment until task 7be60074-46df-4dce-bd4c-b2af152cddea (kolla-ceph-rgw) has been started and output is visible here. 
2026-04-07 03:36:02.590969 | orchestrator |
2026-04-07 03:36:02.591081 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:36:02.591097 | orchestrator |
2026-04-07 03:36:02.591109 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:36:02.591121 | orchestrator | Tuesday 07 April 2026 03:35:29 +0000 (0:00:00.313) 0:00:00.313 *********
2026-04-07 03:36:02.591132 | orchestrator | ok: [testbed-manager]
2026-04-07 03:36:02.591144 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:36:02.591155 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:36:02.591166 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:36:02.591177 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:36:02.591187 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:36:02.591214 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:36:02.591226 | orchestrator |
2026-04-07 03:36:02.591237 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:36:02.591248 | orchestrator | Tuesday 07 April 2026 03:35:30 +0000 (0:00:00.963) 0:00:01.277 *********
2026-04-07 03:36:02.591259 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591271 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591282 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591293 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591304 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591350 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591368 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-07 03:36:02.591437 | orchestrator |
2026-04-07 03:36:02.591460 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-07 03:36:02.591487 | orchestrator |
2026-04-07 03:36:02.591498 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-07 03:36:02.591510 | orchestrator | Tuesday 07 April 2026 03:35:31 +0000 (0:00:00.830) 0:00:02.108 *********
2026-04-07 03:36:02.591522 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 03:36:02.591540 | orchestrator |
2026-04-07 03:36:02.591559 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-07 03:36:02.591577 | orchestrator | Tuesday 07 April 2026 03:35:32 +0000 (0:00:01.674) 0:00:03.783 *********
2026-04-07 03:36:02.591595 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-04-07 03:36:02.591613 | orchestrator |
2026-04-07 03:36:02.591630 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-07 03:36:02.591647 | orchestrator | Tuesday 07 April 2026 03:35:36 +0000 (0:00:03.959) 0:00:07.742 *********
2026-04-07 03:36:02.591665 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-04-07 03:36:02.591683 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-04-07 03:36:02.591702 | orchestrator |
2026-04-07 03:36:02.591721 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-04-07 03:36:02.591742 | orchestrator | Tuesday 07 April 2026 03:35:43 +0000 (0:00:06.569) 0:00:14.312 *********
2026-04-07 03:36:02.591760 | orchestrator | ok: [testbed-manager] => (item=service)
2026-04-07 03:36:02.591780 | orchestrator |
2026-04-07 03:36:02.591794 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-04-07 03:36:02.591805 | orchestrator | Tuesday 07 April 2026 03:35:46 +0000 (0:00:03.338) 0:00:17.650 *********
2026-04-07 03:36:02.591815 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:36:02.591827 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-04-07 03:36:02.591838 | orchestrator |
2026-04-07 03:36:02.591849 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-04-07 03:36:02.591860 | orchestrator | Tuesday 07 April 2026 03:35:50 +0000 (0:00:03.853) 0:00:21.504 *********
2026-04-07 03:36:02.591871 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-04-07 03:36:02.591882 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-04-07 03:36:02.591893 | orchestrator |
2026-04-07 03:36:02.591903 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-04-07 03:36:02.591914 | orchestrator | Tuesday 07 April 2026 03:35:56 +0000 (0:00:06.344) 0:00:27.849 *********
2026-04-07 03:36:02.591925 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-04-07 03:36:02.591936 | orchestrator |
2026-04-07 03:36:02.591947 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:36:02.591959 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.591970 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.591982 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.591993 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.592004 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.592047 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.592059 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:02.592069 | orchestrator |
2026-04-07 03:36:02.592080 | orchestrator |
2026-04-07 03:36:02.592091 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:36:02.592103 | orchestrator | Tuesday 07 April 2026 03:36:02 +0000 (0:00:05.093) 0:00:32.943 *********
2026-04-07 03:36:02.592114 | orchestrator | ===============================================================================
2026-04-07 03:36:02.592133 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.57s
2026-04-07 03:36:02.592144 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.35s
2026-04-07 03:36:02.592155 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.09s
2026-04-07 03:36:02.592166 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.96s
2026-04-07 03:36:02.592176 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.85s
2026-04-07 03:36:02.592187 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.34s
2026-04-07 03:36:02.592198 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.67s
2026-04-07 03:36:02.592209 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s
2026-04-07 03:36:02.592219 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2026-04-07 03:36:05.256225 | orchestrator | 2026-04-07 03:36:05 | INFO  | Task 895e256d-7d02-4aea-8216-b2b0c100cc1a (gnocchi) was prepared for execution.
2026-04-07 03:36:05.256436 | orchestrator | 2026-04-07 03:36:05 | INFO  | It takes a moment until task 895e256d-7d02-4aea-8216-b2b0c100cc1a (gnocchi) has been started and output is visible here.
2026-04-07 03:36:10.945906 | orchestrator |
2026-04-07 03:36:10.946108 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:36:10.946131 | orchestrator |
2026-04-07 03:36:10.946144 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:36:10.946156 | orchestrator | Tuesday 07 April 2026 03:36:09 +0000 (0:00:00.295) 0:00:00.295 *********
2026-04-07 03:36:10.946167 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:36:10.946180 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:36:10.946191 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:36:10.946202 | orchestrator |
2026-04-07 03:36:10.946213 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:36:10.946224 | orchestrator | Tuesday 07 April 2026 03:36:10 +0000 (0:00:00.366) 0:00:00.662 *********
2026-04-07 03:36:10.946235 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-04-07 03:36:10.946247 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-04-07 03:36:10.946259 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-04-07 03:36:10.946270 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-04-07 03:36:10.946281 | orchestrator |
2026-04-07 03:36:10.946292 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-04-07 03:36:10.946356 | orchestrator | skipping: no hosts matched
2026-04-07 03:36:10.946371 | orchestrator |
2026-04-07 03:36:10.946382 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:36:10.946394 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:10.946407 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:10.946420 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:36:10.946465 | orchestrator |
2026-04-07 03:36:10.946480 | orchestrator |
2026-04-07 03:36:10.946493 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:36:10.946505 | orchestrator | Tuesday 07 April 2026 03:36:10 +0000 (0:00:00.382) 0:00:01.044 *********
2026-04-07 03:36:10.946518 | orchestrator | ===============================================================================
2026-04-07 03:36:10.946532 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2026-04-07 03:36:10.946544 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-04-07 03:36:13.567549 | orchestrator | 2026-04-07 03:36:13 | INFO  | Task 4c37d078-de66-489f-bc11-8a83566f0878 (manila) was prepared for execution.
2026-04-07 03:36:13.567638 | orchestrator | 2026-04-07 03:36:13 | INFO  | It takes a moment until task 4c37d078-de66-489f-bc11-8a83566f0878 (manila) has been started and output is visible here.
2026-04-07 03:36:58.144893 | orchestrator |
2026-04-07 03:36:58.145039 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:36:58.145056 | orchestrator |
2026-04-07 03:36:58.145067 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:36:58.145077 | orchestrator | Tuesday 07 April 2026 03:36:18 +0000 (0:00:00.314) 0:00:00.314 *********
2026-04-07 03:36:58.145086 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:36:58.145107 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:36:58.145116 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:36:58.145132 | orchestrator |
2026-04-07 03:36:58.145146 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:36:58.145167 | orchestrator | Tuesday 07 April 2026 03:36:18 +0000 (0:00:00.370) 0:00:00.684 *********
2026-04-07 03:36:58.145184 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-04-07 03:36:58.145219 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-04-07 03:36:58.145234 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-04-07 03:36:58.145248 | orchestrator |
2026-04-07 03:36:58.145350 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-04-07 03:36:58.145365 | orchestrator |
2026-04-07 03:36:58.145380 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-07 03:36:58.145395 | orchestrator | Tuesday 07 April 2026 03:36:19 +0000 (0:00:00.493) 0:00:01.178 *********
2026-04-07 03:36:58.145428 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:36:58.145446 | orchestrator |
2026-04-07 03:36:58.145462 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-07 03:36:58.145476 | orchestrator | Tuesday 07 April 2026 03:36:19 +0000 (0:00:00.683) 0:00:01.861 *********
2026-04-07 03:36:58.145487 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:36:58.145498 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:36:58.145508 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:36:58.145519 | orchestrator |
2026-04-07 03:36:58.145529 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-04-07 03:36:58.145540 | orchestrator | Tuesday 07 April 2026 03:36:20 +0000 (0:00:00.533) 0:00:02.394 *********
2026-04-07 03:36:58.145550 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-04-07 03:36:58.145560 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-04-07 03:36:58.145570 | orchestrator |
2026-04-07 03:36:58.145581 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-04-07 03:36:58.145591 | orchestrator | Tuesday 07 April 2026 03:36:27 +0000 (0:00:06.814) 0:00:09.208 *********
2026-04-07 03:36:58.145603 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-04-07 03:36:58.145614 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-04-07 03:36:58.145647 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-04-07 03:36:58.145658 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-04-07 03:36:58.145667 | orchestrator |
2026-04-07 03:36:58.145678 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-04-07 03:36:58.145688 | orchestrator | Tuesday 07 April 2026 03:36:40 +0000 (0:00:13.636) 0:00:22.845 *********
2026-04-07 03:36:58.145698 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-07 03:36:58.145708 | orchestrator |
2026-04-07 03:36:58.145719 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-04-07 03:36:58.145730 | orchestrator | Tuesday 07 April 2026 03:36:44 +0000 (0:00:03.366) 0:00:26.212 *********
2026-04-07 03:36:58.145740 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-07 03:36:58.145750 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-04-07 03:36:58.145761 | orchestrator |
2026-04-07 03:36:58.145771 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-04-07 03:36:58.145781 | orchestrator | Tuesday 07 April 2026 03:36:48 +0000 (0:00:04.123) 0:00:30.335 *********
2026-04-07 03:36:58.145791 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-07 03:36:58.145801 | orchestrator |
2026-04-07 03:36:58.145811 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-04-07 03:36:58.145822 | orchestrator | Tuesday 07 April 2026 03:36:51 +0000 (0:00:03.470) 0:00:33.806 *********
2026-04-07 03:36:58.145832 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-04-07 03:36:58.145843 | orchestrator |
2026-04-07 03:36:58.145852 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-04-07 03:36:58.145861 | orchestrator | Tuesday 07 April 2026 03:36:55 +0000 (0:00:04.236) 0:00:38.043 *********
2026-04-07 03:36:58.145892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:36:58.145906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:36:58.145926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:36:58.145951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:36:58.145966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:36:58.145978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:36:58.146002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:09.128940 | orchestrator | 2026-04-07 03:37:09.128948 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-07 03:37:09.128956 | orchestrator | Tuesday 07 April 2026 03:36:58 +0000 (0:00:02.346) 0:00:40.389 ********* 2026-04-07 03:37:09.128963 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:37:09.128972 | orchestrator | 2026-04-07 03:37:09.128982 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-07 03:37:09.128995 | orchestrator | Tuesday 07 April 2026 03:36:58 +0000 (0:00:00.664) 0:00:41.053 ********* 2026-04-07 03:37:09.129011 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:37:09.129021 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:37:09.129032 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:37:09.129041 | orchestrator | 2026-04-07 03:37:09.129052 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-07 03:37:09.129061 | orchestrator | Tuesday 07 April 2026 03:36:59 +0000 (0:00:01.057) 0:00:42.111 ********* 2026-04-07 03:37:09.129071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129099 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129110 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129137 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129149 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129155 | orchestrator | 2026-04-07 03:37:09.129162 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-07 03:37:09.129168 | orchestrator | Tuesday 07 April 2026 03:37:01 +0000 (0:00:01.857) 0:00:43.968 ********* 2026-04-07 03:37:09.129175 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129181 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129193 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-07 03:37:09.129213 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-07 03:37:09.129223 | orchestrator | 2026-04-07 03:37:09.129233 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-07 03:37:09.129243 | orchestrator | Tuesday 07 April 2026 03:37:03 +0000 (0:00:01.239) 0:00:45.208 ********* 2026-04-07 03:37:09.129273 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-07 03:37:09.129283 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-07 03:37:09.129292 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-07 03:37:09.129301 | orchestrator | 2026-04-07 03:37:09.129311 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-07 03:37:09.129320 | orchestrator | Tuesday 07 April 2026 03:37:03 +0000 (0:00:00.742) 0:00:45.950 ********* 2026-04-07 03:37:09.129329 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:37:09.129339 | orchestrator | 2026-04-07 03:37:09.129349 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-07 03:37:09.129357 | orchestrator | Tuesday 07 April 2026 03:37:03 +0000 (0:00:00.135) 0:00:46.086 ********* 2026-04-07 03:37:09.129367 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:37:09.129378 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:37:09.129388 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:37:09.129399 | orchestrator | 2026-04-07 03:37:09.129410 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-04-07 03:37:09.129421 | orchestrator | Tuesday 07 April 2026 03:37:04 +0000 (0:00:00.543) 0:00:46.630 ********* 2026-04-07 03:37:09.129432 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:37:09.129443 | orchestrator | 2026-04-07 03:37:09.129453 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-07 03:37:09.129464 | orchestrator | Tuesday 07 April 2026 03:37:05 +0000 (0:00:00.634) 0:00:47.264 ********* 2026-04-07 03:37:09.129495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:10.057637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:10.057704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:10.057711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057717 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057771 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:10.057783 | orchestrator | 2026-04-07 03:37:10.057788 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-07 03:37:10.057793 | orchestrator | Tuesday 07 April 2026 03:37:09 +0000 (0:00:04.112) 0:00:51.376 ********* 2026-04-07 03:37:10.057801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:10.777369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777536 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:37:10.777550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:10.777644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777714 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:37:10.777725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:10.777737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:10.777782 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:37:10.777794 | orchestrator | 2026-04-07 03:37:10.777806 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-07 03:37:10.777821 | orchestrator | Tuesday 07 April 2026 03:37:10 +0000 (0:00:00.939) 0:00:52.316 ********* 2026-04-07 03:37:10.777848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:15.514988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515143 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:37:15.515153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:15.515161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515217 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:37:15.515226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:15.515302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:15.515327 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:37:15.515332 | orchestrator | 2026-04-07 03:37:15.515337 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-07 03:37:15.515345 | orchestrator | Tuesday 07 
April 2026 03:37:11 +0000 (0:00:00.956) 0:00:53.272 ********* 2026-04-07 03:37:15.515362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:22.714860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:22.714998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:22.715019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-07 03:37:22.715044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:22.715170 | orchestrator | 2026-04-07 03:37:22.715183 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-07 03:37:22.715196 | orchestrator | Tuesday 07 April 2026 03:37:15 +0000 (0:00:04.723) 0:00:57.996 ********* 2026-04-07 03:37:22.715221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:27.452546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:27.452640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:37:27.452649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:27.452689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:27.452698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:27.452707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:37:27.452721 | orchestrator | 2026-04-07 03:37:27.452726 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-07 03:37:27.452732 | orchestrator | Tuesday 07 April 2026 03:37:22 +0000 (0:00:06.980) 0:01:04.976 ********* 
2026-04-07 03:37:27.452737 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-07 03:37:27.452745 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-07 03:37:27.452749 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-07 03:37:27.452754 | orchestrator | 2026-04-07 03:37:27.452758 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-07 03:37:27.452766 | orchestrator | Tuesday 07 April 2026 03:37:26 +0000 (0:00:04.027) 0:01:09.004 ********* 2026-04-07 03:37:27.452775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:30.984541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984652 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:37:30.984661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:30.984680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984732 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:37:30.984739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-07 03:37:30.984746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-07 03:37:30.984776 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:37:30.984783 | orchestrator | 2026-04-07 03:37:30.984790 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-07 03:37:30.984798 | orchestrator | Tuesday 07 April 2026 03:37:27 +0000 (0:00:00.716) 0:01:09.720 ********* 2026-04-07 03:37:30.984810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:38:14.014263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:38:14.014413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-07 03:38:14.014430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-07 03:38:14.014566 | orchestrator | 2026-04-07 03:38:14.014574 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-04-07 03:38:14.014582 | orchestrator | Tuesday 07 April 2026 03:37:31 +0000 (0:00:03.556) 0:01:13.276 ********* 2026-04-07 03:38:14.014589 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:14.014597 | orchestrator | 2026-04-07 03:38:14.014604 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-04-07 03:38:14.014610 | orchestrator | Tuesday 07 April 2026 03:37:33 +0000 (0:00:02.200) 0:01:15.477 ********* 2026-04-07 03:38:14.014617 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:14.014624 | orchestrator | 2026-04-07 03:38:14.014631 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-07 03:38:14.014637 | orchestrator | Tuesday 07 April 2026 03:37:35 +0000 (0:00:02.435) 0:01:17.913 ********* 2026-04-07 03:38:14.014644 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:14.014651 | orchestrator | 2026-04-07 03:38:14.014657 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-07 03:38:14.014664 | orchestrator | Tuesday 07 April 2026 03:38:13 +0000 (0:00:38.004) 0:01:55.917 ********* 2026-04-07 03:38:14.014671 | orchestrator | 2026-04-07 03:38:14.014683 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-07 03:38:59.759647 | orchestrator | Tuesday 07 April 2026 03:38:13 
+0000 (0:00:00.081) 0:01:55.999 ********* 2026-04-07 03:38:59.759783 | orchestrator | 2026-04-07 03:38:59.759805 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-07 03:38:59.759821 | orchestrator | Tuesday 07 April 2026 03:38:13 +0000 (0:00:00.084) 0:01:56.083 ********* 2026-04-07 03:38:59.759830 | orchestrator | 2026-04-07 03:38:59.759839 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-07 03:38:59.759848 | orchestrator | Tuesday 07 April 2026 03:38:13 +0000 (0:00:00.078) 0:01:56.162 ********* 2026-04-07 03:38:59.759856 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:59.759865 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:38:59.759873 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:38:59.759881 | orchestrator | 2026-04-07 03:38:59.759889 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-07 03:38:59.759897 | orchestrator | Tuesday 07 April 2026 03:38:23 +0000 (0:00:09.977) 0:02:06.139 ********* 2026-04-07 03:38:59.759906 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:38:59.759914 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:38:59.759922 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:59.759929 | orchestrator | 2026-04-07 03:38:59.759937 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-07 03:38:59.759971 | orchestrator | Tuesday 07 April 2026 03:38:35 +0000 (0:00:11.176) 0:02:17.316 ********* 2026-04-07 03:38:59.759980 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:38:59.759988 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:59.759996 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:38:59.760004 | orchestrator | 2026-04-07 03:38:59.760012 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-07 
03:38:59.760020 | orchestrator | Tuesday 07 April 2026 03:38:45 +0000 (0:00:10.297) 0:02:27.614 ********* 2026-04-07 03:38:59.760028 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:38:59.760036 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:38:59.760044 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:38:59.760052 | orchestrator | 2026-04-07 03:38:59.760060 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:38:59.760070 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-07 03:38:59.760079 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 03:38:59.760087 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 03:38:59.760096 | orchestrator | 2026-04-07 03:38:59.760104 | orchestrator | 2026-04-07 03:38:59.760118 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:38:59.760131 | orchestrator | Tuesday 07 April 2026 03:38:59 +0000 (0:00:13.771) 0:02:41.385 ********* 2026-04-07 03:38:59.760178 | orchestrator | =============================================================================== 2026-04-07 03:38:59.760193 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.00s 2026-04-07 03:38:59.760207 | orchestrator | manila : Restart manila-share container -------------------------------- 13.77s 2026-04-07 03:38:59.760220 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.64s 2026-04-07 03:38:59.760234 | orchestrator | manila : Restart manila-data container --------------------------------- 11.18s 2026-04-07 03:38:59.760246 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.30s 2026-04-07 03:38:59.760270 | 
orchestrator | manila : Restart manila-api container ----------------------------------- 9.98s 2026-04-07 03:38:59.760280 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.98s 2026-04-07 03:38:59.760289 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.81s 2026-04-07 03:38:59.760299 | orchestrator | manila : Copying over config.json files for services -------------------- 4.72s 2026-04-07 03:38:59.760308 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.24s 2026-04-07 03:38:59.760317 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.12s 2026-04-07 03:38:59.760326 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.11s 2026-04-07 03:38:59.760336 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.03s 2026-04-07 03:38:59.760346 | orchestrator | manila : Check manila containers ---------------------------------------- 3.56s 2026-04-07 03:38:59.760356 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.47s 2026-04-07 03:38:59.760365 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.37s 2026-04-07 03:38:59.760375 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.44s 2026-04-07 03:38:59.760384 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.35s 2026-04-07 03:38:59.760392 | orchestrator | manila : Creating Manila database --------------------------------------- 2.20s 2026-04-07 03:38:59.760400 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.86s 2026-04-07 03:39:00.230096 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-04-07 03:39:12.460793 | orchestrator | 2026-04-07 03:39:12 
| INFO  | Task ecd6c6e0-f2cd-4e8d-8ddb-480466a370d9 (netdata) was prepared for execution. 2026-04-07 03:39:12.460897 | orchestrator | 2026-04-07 03:39:12 | INFO  | It takes a moment until task ecd6c6e0-f2cd-4e8d-8ddb-480466a370d9 (netdata) has been started and output is visible here. 2026-04-07 03:40:50.268686 | orchestrator | 2026-04-07 03:40:50.268764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:40:50.268774 | orchestrator | 2026-04-07 03:40:50.268781 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:40:50.268787 | orchestrator | Tuesday 07 April 2026 03:39:17 +0000 (0:00:00.254) 0:00:00.254 ********* 2026-04-07 03:40:50.268794 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-07 03:40:50.268802 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-07 03:40:50.268809 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-07 03:40:50.268816 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-07 03:40:50.268823 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-07 03:40:50.268830 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-07 03:40:50.268837 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-07 03:40:50.268844 | orchestrator | 2026-04-07 03:40:50.268851 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-07 03:40:50.268858 | orchestrator | 2026-04-07 03:40:50.268864 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-07 03:40:50.268869 | orchestrator | Tuesday 07 April 2026 03:39:18 +0000 (0:00:00.975) 0:00:01.230 ********* 2026-04-07 03:40:50.268877 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:40:50.268887 | orchestrator | 2026-04-07 03:40:50.268896 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-07 03:40:50.268903 | orchestrator | Tuesday 07 April 2026 03:39:20 +0000 (0:00:01.480) 0:00:02.710 ********* 2026-04-07 03:40:50.268909 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:40:50.268916 | orchestrator | ok: [testbed-manager] 2026-04-07 03:40:50.268923 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:40:50.268929 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:40:50.268948 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:40:50.268956 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:40:50.268963 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:40:50.268967 | orchestrator | 2026-04-07 03:40:50.268972 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-07 03:40:50.268982 | orchestrator | Tuesday 07 April 2026 03:39:21 +0000 (0:00:01.946) 0:00:04.656 ********* 2026-04-07 03:40:50.268986 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:40:50.268990 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:40:50.268994 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:40:50.268998 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:40:50.269002 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:40:50.269006 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:40:50.269010 | orchestrator | ok: [testbed-manager] 2026-04-07 03:40:50.269014 | orchestrator | 2026-04-07 03:40:50.269018 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-07 03:40:50.269021 | orchestrator | Tuesday 07 April 2026 03:39:24 +0000 (0:00:02.300) 0:00:06.957 ********* 
2026-04-07 03:40:50.269026 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:40:50.269029 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269033 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:40:50.269055 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:40:50.269060 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:40:50.269081 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:40:50.269085 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:40:50.269089 | orchestrator | 2026-04-07 03:40:50.269093 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-07 03:40:50.269108 | orchestrator | Tuesday 07 April 2026 03:39:25 +0000 (0:00:01.656) 0:00:08.613 ********* 2026-04-07 03:40:50.269112 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:40:50.269115 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:40:50.269119 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269123 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:40:50.269126 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:40:50.269130 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:40:50.269134 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:40:50.269138 | orchestrator | 2026-04-07 03:40:50.269142 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-07 03:40:50.269145 | orchestrator | Tuesday 07 April 2026 03:39:42 +0000 (0:00:16.328) 0:00:24.941 ********* 2026-04-07 03:40:50.269149 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:40:50.269153 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:40:50.269157 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:40:50.269160 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269164 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:40:50.269168 | orchestrator | changed: [testbed-node-1] 2026-04-07 
03:40:50.269171 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:40:50.269175 | orchestrator | 2026-04-07 03:40:50.269179 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-07 03:40:50.269183 | orchestrator | Tuesday 07 April 2026 03:40:23 +0000 (0:00:41.588) 0:01:06.530 ********* 2026-04-07 03:40:50.269188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:40:50.269193 | orchestrator | 2026-04-07 03:40:50.269197 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-07 03:40:50.269201 | orchestrator | Tuesday 07 April 2026 03:40:25 +0000 (0:00:01.749) 0:01:08.279 ********* 2026-04-07 03:40:50.269205 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-07 03:40:50.269209 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-07 03:40:50.269213 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-07 03:40:50.269217 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-07 03:40:50.269231 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-07 03:40:50.269235 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-07 03:40:50.269239 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-07 03:40:50.269243 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-07 03:40:50.269247 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-07 03:40:50.269250 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-07 03:40:50.269254 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-07 03:40:50.269258 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2026-04-07 03:40:50.269262 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-07 03:40:50.269267 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-07 03:40:50.269271 | orchestrator | 2026-04-07 03:40:50.269276 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-07 03:40:50.269282 | orchestrator | Tuesday 07 April 2026 03:40:29 +0000 (0:00:03.661) 0:01:11.941 ********* 2026-04-07 03:40:50.269286 | orchestrator | ok: [testbed-manager] 2026-04-07 03:40:50.269291 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:40:50.269296 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:40:50.269300 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:40:50.269309 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:40:50.269313 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:40:50.269317 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:40:50.269322 | orchestrator | 2026-04-07 03:40:50.269326 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-07 03:40:50.269331 | orchestrator | Tuesday 07 April 2026 03:40:30 +0000 (0:00:01.392) 0:01:13.333 ********* 2026-04-07 03:40:50.269335 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269339 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:40:50.269344 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:40:50.269348 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:40:50.269353 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:40:50.269357 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:40:50.269362 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:40:50.269366 | orchestrator | 2026-04-07 03:40:50.269371 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-07 03:40:50.269375 | orchestrator | Tuesday 07 April 2026 03:40:31 +0000 
(0:00:01.358) 0:01:14.691 ********* 2026-04-07 03:40:50.269380 | orchestrator | ok: [testbed-manager] 2026-04-07 03:40:50.269384 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:40:50.269389 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:40:50.269393 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:40:50.269397 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:40:50.269402 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:40:50.269406 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:40:50.269411 | orchestrator | 2026-04-07 03:40:50.269415 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-07 03:40:50.269420 | orchestrator | Tuesday 07 April 2026 03:40:33 +0000 (0:00:01.369) 0:01:16.060 ********* 2026-04-07 03:40:50.269425 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:40:50.269429 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:40:50.269433 | orchestrator | ok: [testbed-manager] 2026-04-07 03:40:50.269438 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:40:50.269442 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:40:50.269447 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:40:50.269451 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:40:50.269455 | orchestrator | 2026-04-07 03:40:50.269460 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-07 03:40:50.269464 | orchestrator | Tuesday 07 April 2026 03:40:35 +0000 (0:00:01.741) 0:01:17.802 ********* 2026-04-07 03:40:50.269469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-07 03:40:50.269479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:40:50.269483 | orchestrator | 2026-04-07 
03:40:50.269488 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-07 03:40:50.269492 | orchestrator | Tuesday 07 April 2026 03:40:36 +0000 (0:00:01.546) 0:01:19.349 ********* 2026-04-07 03:40:50.269497 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269501 | orchestrator | 2026-04-07 03:40:50.269506 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-07 03:40:50.269510 | orchestrator | Tuesday 07 April 2026 03:40:38 +0000 (0:00:02.334) 0:01:21.684 ********* 2026-04-07 03:40:50.269514 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:40:50.269519 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:40:50.269523 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:40:50.269528 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:40:50.269532 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:40:50.269537 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:40:50.269541 | orchestrator | changed: [testbed-manager] 2026-04-07 03:40:50.269546 | orchestrator | 2026-04-07 03:40:50.269550 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:40:50.269558 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.269564 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.269569 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.269574 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.269581 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.795316 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.795392 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 03:40:50.795399 | orchestrator | 2026-04-07 03:40:50.795405 | orchestrator | 2026-04-07 03:40:50.795411 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:40:50.795417 | orchestrator | Tuesday 07 April 2026 03:40:50 +0000 (0:00:11.275) 0:01:32.959 ********* 2026-04-07 03:40:50.795422 | orchestrator | =============================================================================== 2026-04-07 03:40:50.795427 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.59s 2026-04-07 03:40:50.795432 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.33s 2026-04-07 03:40:50.795436 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.28s 2026-04-07 03:40:50.795441 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.66s 2026-04-07 03:40:50.795445 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.33s 2026-04-07 03:40:50.795450 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.30s 2026-04-07 03:40:50.795455 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.95s 2026-04-07 03:40:50.795459 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.75s 2026-04-07 03:40:50.795464 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.74s 2026-04-07 03:40:50.795468 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.66s 2026-04-07 03:40:50.795473 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.55s 2026-04-07 03:40:50.795477 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.48s 2026-04-07 03:40:50.795482 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.39s 2026-04-07 03:40:50.795488 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s 2026-04-07 03:40:50.795492 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.36s 2026-04-07 03:40:50.795497 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-04-07 03:40:53.558333 | orchestrator | 2026-04-07 03:40:53 | INFO  | Task 021fe1c5-efb0-421e-b9fe-afa1455e14b3 (prometheus) was prepared for execution. 2026-04-07 03:40:53.558412 | orchestrator | 2026-04-07 03:40:53 | INFO  | It takes a moment until task 021fe1c5-efb0-421e-b9fe-afa1455e14b3 (prometheus) has been started and output is visible here. 2026-04-07 03:41:03.824713 | orchestrator | 2026-04-07 03:41:03.824845 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:41:03.824870 | orchestrator | 2026-04-07 03:41:03.824880 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:41:03.824919 | orchestrator | Tuesday 07 April 2026 03:40:58 +0000 (0:00:00.306) 0:00:00.306 ********* 2026-04-07 03:41:03.824929 | orchestrator | ok: [testbed-manager] 2026-04-07 03:41:03.824939 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:41:03.824961 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:41:03.824970 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:41:03.824978 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:41:03.824987 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:41:03.824996 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:41:03.825005 | orchestrator | 2026-04-07 03:41:03.825014 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:41:03.825043 | orchestrator | Tuesday 07 April 2026 03:40:59 +0000 (0:00:00.978) 0:00:01.285 ********* 2026-04-07 03:41:03.825053 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825063 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825071 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825080 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825089 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825097 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825106 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-07 03:41:03.825115 | orchestrator | 2026-04-07 03:41:03.825123 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-07 03:41:03.825132 | orchestrator | 2026-04-07 03:41:03.825141 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-07 03:41:03.825149 | orchestrator | Tuesday 07 April 2026 03:41:00 +0000 (0:00:01.053) 0:00:02.339 ********* 2026-04-07 03:41:03.825159 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:41:03.825169 | orchestrator | 2026-04-07 03:41:03.825178 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-07 03:41:03.825186 | orchestrator | Tuesday 07 April 2026 03:41:01 +0000 (0:00:01.467) 0:00:03.806 ********* 2026-04-07 03:41:03.825198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:03.825211 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 03:41:03.825221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:03.825239 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:03.825270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:03.825283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:03.825294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-04-07 03:41:03.825305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:03.825315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:03.825327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:03.825339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:03.825361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:04.865786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:04.865792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:04.865800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 03:41:04.865829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:04.865847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:04.865895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:04.865909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.481914 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.482011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.482099 | orchestrator | 2026-04-07 03:41:10.482106 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-07 03:41:10.482111 | orchestrator | Tuesday 07 April 2026 03:41:04 +0000 (0:00:03.013) 0:00:06.819 ********* 2026-04-07 03:41:10.482117 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 03:41:10.482123 | orchestrator | 2026-04-07 03:41:10.482127 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-07 03:41:10.482131 | orchestrator | Tuesday 07 April 2026 03:41:06 +0000 (0:00:01.779) 0:00:08.599 ********* 2026-04-07 03:41:10.482136 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 03:41:10.482165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482211 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482219 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:10.482228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.482233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.482240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:10.482247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:10.482262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739526 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:12.739571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:12.739581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:12.739695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-07 03:41:12.739707 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 03:41:12.739724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:12.739749 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:12.739758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:12.739773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:14.150437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:14.151761 | orchestrator | 2026-04-07 03:41:14.151851 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-07 03:41:14.151874 | orchestrator | Tuesday 07 April 2026 03:41:12 +0000 (0:00:06.089) 0:00:14.688 ********* 2026-04-07 03:41:14.151896 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 03:41:14.151916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.151933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.152008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 03:41:14.152139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.152160 | orchestrator | skipping: [testbed-manager] 2026-04-07 03:41:14.152176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.152208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.152223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.152240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.152256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-07 03:41:14.152273 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:41:14.152288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.152312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.152346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.368517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.368640 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:41:14.368654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.368667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.368680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.368718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:14.368761 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:41:14.368793 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.368806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368829 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:41:14.368840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.368852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 03:41:14.368874 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:41:14.368892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:14.368919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:15.369729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 03:41:15.369811 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:41:15.369823 | orchestrator | 2026-04-07 03:41:15.369831 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-07 03:41:15.369839 | orchestrator | Tuesday 07 April 2026 03:41:14 +0000 (0:00:01.643) 0:00:16.332 ********* 2026-04-07 03:41:15.369847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-07 03:41:15.369855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:15.369863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:15.369887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-07 03:41:15.369919 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:15.369924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:15.369929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:15.369933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:15.369937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:15.369941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:15.369949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:15.369956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:15.369966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:16.804986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:16.805187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:16.805197 | orchestrator | skipping: [testbed-manager] 2026-04-07 03:41:16.805204 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:41:16.805209 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:41:16.805214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:16.805219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:16.805224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:16.805255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:16.805261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-04-07 03:41:16.805278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:16.805284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:16.805288 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:41:16.805293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 03:41:16.805298 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:41:16.805303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:16.805307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:16.805331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 03:41:16.805336 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:41:16.805341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 03:41:16.805351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 03:41:21.020594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 03:41:21.020673 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:41:21.020682 | orchestrator | 2026-04-07 03:41:21.020690 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-07 03:41:21.020698 | orchestrator | Tuesday 07 April 2026 03:41:16 +0000 (0:00:02.414) 0:00:18.747 ********* 2026-04-07 03:41:21.020706 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 03:41:21.020713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-04-07 03:41:21.020915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:41:21.020921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:21.020961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:21.020968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:21.020977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:21.020981 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:21.020992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-04-07 03:41:23.600453 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600629 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 03:41:23.600635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:41:23.600655 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600664 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:23.600673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:41:28.074336 | orchestrator | 2026-04-07 03:41:28.074502 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-07 03:41:28.074515 | orchestrator | Tuesday 07 April 2026 03:41:23 +0000 (0:00:06.803) 0:00:25.550 ********* 2026-04-07 03:41:28.074523 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 03:41:28.074531 | orchestrator | 2026-04-07 03:41:28.074538 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-07 03:41:28.074562 | orchestrator | Tuesday 07 April 2026 03:41:24 +0000 (0:00:00.959) 0:00:26.509 ********* 2026-04-07 03:41:28.074572 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074583 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074590 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:28.074608 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074618 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074625 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074648 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074678 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092592, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074685 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074696 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074703 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074710 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:28.074725 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.880543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881453 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092633, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775525912.9552524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:29.881531 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881543 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881553 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881575 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881607 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881636 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881665 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881671 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:29.881695 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537622 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537754 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537791 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537804 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537847 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092583, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9478886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:32.537859 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537890 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537902 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537919 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537930 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537949 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.537990 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.538290 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:32.538378 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193083 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193160 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193171 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193194 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193201 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193208 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193215 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092616, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.953165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:34.193232 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193242 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193249 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193261 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193281 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:34.193292 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.720947 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721042 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721061 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721067 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721072 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721076 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092577, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:35.721081 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721097 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721106 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721110 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721115 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721120 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721125 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721130 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:35.721141 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180223 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180285 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180295 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180303 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180318 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180381 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180415 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180423 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092595, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-07 03:41:37.180430 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180437 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180444 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 03:41:37.180467 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-07 
03:41:37.180479 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554159 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554272 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554299 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554322 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554388 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092611, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.951586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554424 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554469 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554490 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554508 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554520 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554531 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554550 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554563 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:41:38.554583 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:38.554602 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668254 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668348 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:41:44.668357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092600, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9497123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668363 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668386 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668401 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668406 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668422 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:41:44.668427 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668431 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668436 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668445 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:41:44.668449 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668454 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:41:44.668461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668465 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:41:44.668470 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092588, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9483857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:41:44.668479 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092630, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9539807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046208 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092572, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9446867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046322 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092657, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046367 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092628, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9534092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046383 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092580, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.945887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046413 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092574, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9453628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046427 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092605, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9511244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046441 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092601, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9501405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092651, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.957816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-07 03:42:13.046488 | orchestrator | 
2026-04-07 03:42:13.046503 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-07 03:42:13.046520 | orchestrator | Tuesday 07 April 2026 03:41:51 +0000 (0:00:26.975) 0:00:53.485 *********
2026-04-07 03:42:13.046534 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 03:42:13.046545 | orchestrator | 
2026-04-07 03:42:13.046553 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-07 03:42:13.046572 | orchestrator | Tuesday 07 April 2026 03:41:52 +0000 (0:00:00.789) 0:00:54.275 *********
2026-04-07 03:42:13.046585 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046599 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046612 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046636 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.046650 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046676 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046688 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046700 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.046712 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046719 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046726 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046741 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.046767 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046792 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046817 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.046832 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046864 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046893 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.046908 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.046922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046936 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.046948 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.046992 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.047007 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:13.047019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.047031 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-07 03:42:13.047043 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-07 03:42:13.047054 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-07 03:42:13.047067 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 03:42:13.047080 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 03:42:13.047094 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-07 03:42:13.047108 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-07 03:42:13.047120 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-07 03:42:13.047134 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-07 03:42:13.047146 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-07 03:42:13.047159 | orchestrator | 
2026-04-07 03:42:13.047172 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-07 03:42:13.047203 | orchestrator | Tuesday 07 April 2026 03:41:54 +0000 (0:00:01.926) 0:00:56.201 *********
2026-04-07 03:42:13.047217 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:13.047231 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:13.047244 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:13.047256 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:13.047267 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:13.047277 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:13.047302 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:31.211658 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.212795 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:31.212864 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.212877 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:31.212887 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.212896 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-07 03:42:31.212905 | orchestrator | 
2026-04-07 03:42:31.212916 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-07 03:42:31.212925 | orchestrator | Tuesday 07 April 2026 03:42:13 +0000 (0:00:18.802) 0:01:15.003 *********
2026-04-07 03:42:31.212934 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.212943 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.212975 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.212985 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.212994 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.213002 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.213012 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.213021 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213030 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.213039 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213048 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.213056 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213065 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-07 03:42:31.213074 | orchestrator | 
2026-04-07 03:42:31.213083 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-07 03:42:31.213092 | orchestrator | Tuesday 07 April 2026 03:42:15 +0000 (0:00:02.849) 0:01:17.853 *********
2026-04-07 03:42:31.213101 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213112 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.213122 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213131 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.213140 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213148 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213158 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213207 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.213223 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213246 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213280 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213311 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-07 03:42:31.213337 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213353 | orchestrator | 
2026-04-07 03:42:31.213368 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-07 03:42:31.213382 | orchestrator | Tuesday 07 April 2026 03:42:17 +0000 (0:00:01.973) 0:01:19.827 *********
2026-04-07 03:42:31.213397 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 03:42:31.213411 | orchestrator | 
2026-04-07 03:42:31.213427 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-07 03:42:31.213446 | orchestrator | Tuesday 07 April 2026 03:42:18 +0000 (0:00:00.794) 0:01:20.622 *********
2026-04-07 03:42:31.213458 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:42:31.213467 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.213476 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.213484 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.213493 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213502 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213510 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213519 | orchestrator | 
2026-04-07 03:42:31.213528 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-07 03:42:31.213537 | orchestrator | Tuesday 07 April 2026 03:42:19 +0000 (0:00:00.833) 0:01:21.455 *********
2026-04-07 03:42:31.213546 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:42:31.213554 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213563 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213572 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213580 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:42:31.213590 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:42:31.213599 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:42:31.213608 | orchestrator | 
2026-04-07 03:42:31.213617 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-07 03:42:31.213652 | orchestrator | Tuesday 07 April 2026 03:42:21 +0000 (0:00:02.428) 0:01:23.884 *********
2026-04-07 03:42:31.213662 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213670 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.213679 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213688 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213697 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213706 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213715 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:42:31.213724 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.213733 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.213742 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213751 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213760 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213768 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-07 03:42:31.213787 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213797 | orchestrator | 
2026-04-07 03:42:31.213806 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-07 03:42:31.213815 | orchestrator | Tuesday 07 April 2026 03:42:23 +0000 (0:00:01.619) 0:01:25.503 *********
2026-04-07 03:42:31.213824 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213833 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.213842 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213851 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.213860 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213869 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.213878 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213888 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.213896 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213905 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.213914 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213923 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.213932 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-07 03:42:31.213941 | orchestrator | 
2026-04-07 03:42:31.214088 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-07 03:42:31.214106 | orchestrator | Tuesday 07 April 2026 03:42:25 +0000 (0:00:01.610) 0:01:27.113 *********
2026-04-07 03:42:31.214115 | orchestrator | [WARNING]: Skipped
2026-04-07 03:42:31.214126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-07 03:42:31.214135 | orchestrator | due to this access issue:
2026-04-07 03:42:31.214144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-07 03:42:31.214153 | orchestrator | not a directory
2026-04-07 03:42:31.214171 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-07 03:42:31.214212 | orchestrator | 
2026-04-07 03:42:31.214231 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-07 03:42:31.214244 | orchestrator | Tuesday 07 April 2026 03:42:26 +0000 (0:00:01.045) 0:01:28.477 *********
2026-04-07 03:42:31.214259 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:42:31.214276 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.214292 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.214308 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.214323 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.214338 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.214353 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.214416 | orchestrator | 
2026-04-07 03:42:31.214433 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-07 03:42:31.214447 | orchestrator | Tuesday 07 April 2026 03:42:27 +0000 (0:00:01.045) 0:01:29.523 *********
2026-04-07 03:42:31.214462 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:42:31.214478 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:42:31.214492 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:42:31.214506 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:42:31.214522 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:42:31.214536 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:42:31.214551 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:42:31.214566 | orchestrator | 
2026-04-07 03:42:31.214580 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-04-07 03:42:31.214594 | orchestrator | Tuesday 07 April 2026 03:42:28 +0000 (0:00:00.995) 0:01:30.518 *********
2026-04-07 03:42:31.214645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044474 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-07 03:42:33.044480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:33.044586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:33.044596 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-07 03:42:33.044604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:33.044613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:33.044621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:33.044635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:33.044650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:42:33.044659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:33.044673 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.109843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.110066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.110128 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.110161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-07 03:42:35.110183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.110189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-07 03:42:35.110194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110208 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 03:42:35.110223 | orchestrator | 2026-04-07 03:42:35.110229 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-07 03:42:35.110236 | orchestrator | Tuesday 07 April 2026 03:42:33 +0000 (0:00:04.488) 0:01:35.006 ********* 2026-04-07 03:42:35.110241 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-07 03:42:35.110246 | orchestrator | skipping: [testbed-manager] 2026-04-07 03:42:35.110251 | orchestrator | 2026-04-07 03:42:35.110256 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110261 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:01.294) 0:01:36.301 ********* 2026-04-07 03:42:35.110265 | orchestrator | 2026-04-07 03:42:35.110270 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-04-07 03:42:35.110274 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.262) 0:01:36.564 ********* 2026-04-07 03:42:35.110279 | orchestrator | 2026-04-07 03:42:35.110284 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110288 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.093) 0:01:36.657 ********* 2026-04-07 03:42:35.110293 | orchestrator | 2026-04-07 03:42:35.110297 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110302 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.078) 0:01:36.736 ********* 2026-04-07 03:42:35.110306 | orchestrator | 2026-04-07 03:42:35.110311 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110315 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.069) 0:01:36.805 ********* 2026-04-07 03:42:35.110320 | orchestrator | 2026-04-07 03:42:35.110325 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110329 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.074) 0:01:36.880 ********* 2026-04-07 03:42:35.110334 | orchestrator | 2026-04-07 03:42:35.110338 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-07 03:42:35.110346 | orchestrator | Tuesday 07 April 2026 03:42:34 +0000 (0:00:00.071) 0:01:36.952 ********* 2026-04-07 03:44:32.250417 | orchestrator | 2026-04-07 03:44:32.250531 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-07 03:44:32.250547 | orchestrator | Tuesday 07 April 2026 03:42:35 +0000 (0:00:00.101) 0:01:37.053 ********* 2026-04-07 03:44:32.250558 | orchestrator | changed: [testbed-manager] 2026-04-07 03:44:32.250568 | orchestrator | 2026-04-07 03:44:32.250578 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-07 03:44:32.250589 | orchestrator | Tuesday 07 April 2026 03:43:00 +0000 (0:00:25.902) 0:02:02.956 ********* 2026-04-07 03:44:32.250599 | orchestrator | changed: [testbed-manager] 2026-04-07 03:44:32.250609 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:44:32.250620 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:44:32.250631 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:44:32.250642 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:44:32.250652 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:44:32.250663 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:44:32.250671 | orchestrator | 2026-04-07 03:44:32.250678 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-07 03:44:32.250685 | orchestrator | Tuesday 07 April 2026 03:43:14 +0000 (0:00:13.152) 0:02:16.108 ********* 2026-04-07 03:44:32.250710 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:44:32.250717 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:44:32.250723 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:44:32.250729 | orchestrator | 2026-04-07 03:44:32.250736 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-07 03:44:32.250743 | orchestrator | Tuesday 07 April 2026 03:43:24 +0000 (0:00:10.737) 0:02:26.845 ********* 2026-04-07 03:44:32.250749 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:44:32.250755 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:44:32.250762 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:44:32.250768 | orchestrator | 2026-04-07 03:44:32.250774 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-07 03:44:32.250781 | orchestrator | Tuesday 07 April 2026 03:43:30 +0000 (0:00:05.976) 0:02:32.821 ********* 2026-04-07 03:44:32.250787 | 
orchestrator | changed: [testbed-node-0] 2026-04-07 03:44:32.250793 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:44:32.250799 | orchestrator | changed: [testbed-node-5] 2026-04-07 03:44:32.250805 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:44:32.250811 | orchestrator | changed: [testbed-manager] 2026-04-07 03:44:32.250817 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:44:32.250823 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:44:32.250829 | orchestrator | 2026-04-07 03:44:32.250836 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-07 03:44:32.250842 | orchestrator | Tuesday 07 April 2026 03:43:45 +0000 (0:00:14.595) 0:02:47.417 ********* 2026-04-07 03:44:32.250886 | orchestrator | changed: [testbed-manager] 2026-04-07 03:44:32.250894 | orchestrator | 2026-04-07 03:44:32.250900 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-07 03:44:32.250917 | orchestrator | Tuesday 07 April 2026 03:43:59 +0000 (0:00:13.679) 0:03:01.096 ********* 2026-04-07 03:44:32.250924 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:44:32.250930 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:44:32.250936 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:44:32.250943 | orchestrator | 2026-04-07 03:44:32.250949 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-07 03:44:32.250955 | orchestrator | Tuesday 07 April 2026 03:44:09 +0000 (0:00:10.711) 0:03:11.808 ********* 2026-04-07 03:44:32.250961 | orchestrator | changed: [testbed-manager] 2026-04-07 03:44:32.250967 | orchestrator | 2026-04-07 03:44:32.250974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-07 03:44:32.250980 | orchestrator | Tuesday 07 April 2026 03:44:20 +0000 (0:00:11.077) 0:03:22.885 ********* 2026-04-07 03:44:32.250986 | 
orchestrator | changed: [testbed-node-5] 2026-04-07 03:44:32.250993 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:44:32.251001 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:44:32.251008 | orchestrator | 2026-04-07 03:44:32.251015 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:44:32.251023 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-07 03:44:32.251032 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 03:44:32.251039 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 03:44:32.251046 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-07 03:44:32.251053 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 03:44:32.251060 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 03:44:32.251074 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-07 03:44:32.251081 | orchestrator | 2026-04-07 03:44:32.251088 | orchestrator | 2026-04-07 03:44:32.251096 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:44:32.251103 | orchestrator | Tuesday 07 April 2026 03:44:31 +0000 (0:00:10.674) 0:03:33.559 ********* 2026-04-07 03:44:32.251110 | orchestrator | =============================================================================== 2026-04-07 03:44:32.251118 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.98s 2026-04-07 03:44:32.251142 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 25.90s 
2026-04-07 03:44:32.251149 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.80s 2026-04-07 03:44:32.251156 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.60s 2026-04-07 03:44:32.251163 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.68s 2026-04-07 03:44:32.251170 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.15s 2026-04-07 03:44:32.251176 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.08s 2026-04-07 03:44:32.251183 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.74s 2026-04-07 03:44:32.251190 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.71s 2026-04-07 03:44:32.251198 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.67s 2026-04-07 03:44:32.251205 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.80s 2026-04-07 03:44:32.251212 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.09s 2026-04-07 03:44:32.251219 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.98s 2026-04-07 03:44:32.251225 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.49s 2026-04-07 03:44:32.251232 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.01s 2026-04-07 03:44:32.251238 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.85s 2026-04-07 03:44:32.251244 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.43s 2026-04-07 03:44:32.251250 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.41s 2026-04-07 
03:44:32.251256 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.97s 2026-04-07 03:44:32.251262 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.93s 2026-04-07 03:44:37.517780 | orchestrator | 2026-04-07 03:44:37 | INFO  | Task 6067c987-3163-44e4-bec5-e72edac3d6b6 (grafana) was prepared for execution. 2026-04-07 03:44:37.517951 | orchestrator | 2026-04-07 03:44:37 | INFO  | It takes a moment until task 6067c987-3163-44e4-bec5-e72edac3d6b6 (grafana) has been started and output is visible here. 2026-04-07 03:44:48.326420 | orchestrator | 2026-04-07 03:44:48.326548 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:44:48.326568 | orchestrator | 2026-04-07 03:44:48.326581 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:44:48.326594 | orchestrator | Tuesday 07 April 2026 03:44:42 +0000 (0:00:00.300) 0:00:00.300 ********* 2026-04-07 03:44:48.326608 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:44:48.326622 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:44:48.326634 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:44:48.326647 | orchestrator | 2026-04-07 03:44:48.326659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:44:48.326671 | orchestrator | Tuesday 07 April 2026 03:44:42 +0000 (0:00:00.394) 0:00:00.695 ********* 2026-04-07 03:44:48.326684 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-07 03:44:48.326719 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-07 03:44:48.326734 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-07 03:44:48.326748 | orchestrator | 2026-04-07 03:44:48.326761 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-07 
03:44:48.326776 | orchestrator | 2026-04-07 03:44:48.326790 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-07 03:44:48.326803 | orchestrator | Tuesday 07 April 2026 03:44:43 +0000 (0:00:00.521) 0:00:01.217 ********* 2026-04-07 03:44:48.326818 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:44:48.326827 | orchestrator | 2026-04-07 03:44:48.326863 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-07 03:44:48.326872 | orchestrator | Tuesday 07 April 2026 03:44:43 +0000 (0:00:00.633) 0:00:01.850 ********* 2026-04-07 03:44:48.326884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.326896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.326904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.326913 | orchestrator | 2026-04-07 03:44:48.326921 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-07 03:44:48.326928 | orchestrator | Tuesday 07 April 2026 03:44:44 +0000 (0:00:00.975) 0:00:02.825 ********* 2026-04-07 03:44:48.326937 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-07 03:44:48.326947 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-07 03:44:48.326956 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:44:48.326966 | orchestrator | 2026-04-07 03:44:48.326975 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-07 03:44:48.326985 | orchestrator | Tuesday 07 April 2026 03:44:45 +0000 (0:00:00.964) 0:00:03.790 ********* 2026-04-07 03:44:48.326995 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:44:48.327013 | orchestrator | 2026-04-07 03:44:48.327023 | orchestrator | TASK 
[service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-07 03:44:48.327033 | orchestrator | Tuesday 07 April 2026 03:44:46 +0000 (0:00:00.599) 0:00:04.390 ********* 2026-04-07 03:44:48.327067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.327078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.327088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:48.327097 | orchestrator | 2026-04-07 03:44:48.327106 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-07 03:44:48.327116 | orchestrator | Tuesday 07 April 2026 03:44:47 +0000 (0:00:01.382) 0:00:05.773 ********* 2026-04-07 03:44:48.327125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:48.327139 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:44:48.327157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:48.327184 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:44:48.327214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:55.691911 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:44:55.692052 | orchestrator | 2026-04-07 03:44:55.692081 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-07 03:44:55.692101 | orchestrator | Tuesday 07 April 2026 03:44:48 +0000 (0:00:00.640) 0:00:06.413 ********* 2026-04-07 03:44:55.692122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:55.692145 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:44:55.692167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:55.692187 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:44:55.692207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-07 03:44:55.692226 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:44:55.692245 | orchestrator | 2026-04-07 03:44:55.692264 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-07 03:44:55.692310 | orchestrator | Tuesday 07 April 2026 03:44:49 +0000 (0:00:00.722) 0:00:07.136 ********* 2026-04-07 
03:44:55.692346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692492 | orchestrator | 2026-04-07 03:44:55.692511 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-07 03:44:55.692531 | orchestrator | Tuesday 07 April 2026 03:44:50 +0000 (0:00:01.327) 0:00:08.464 ********* 2026-04-07 03:44:55.692549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:44:55.692627 | orchestrator | 2026-04-07 03:44:55.692647 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-07 03:44:55.692667 | orchestrator | Tuesday 07 April 2026 03:44:52 +0000 (0:00:01.712) 0:00:10.177 ********* 2026-04-07 03:44:55.692687 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:44:55.692707 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:44:55.692726 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:44:55.692744 | orchestrator | 2026-04-07 03:44:55.692764 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-07 03:44:55.692782 | orchestrator | Tuesday 07 April 2026 03:44:52 +0000 (0:00:00.370) 0:00:10.547 ********* 2026-04-07 03:44:55.692801 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-07 03:44:55.692821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-07 03:44:55.692869 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-07 03:44:55.692886 | orchestrator | 2026-04-07 03:44:55.692898 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-07 03:44:55.692909 | 
orchestrator | Tuesday 07 April 2026 03:44:53 +0000 (0:00:01.374) 0:00:11.921 ********* 2026-04-07 03:44:55.692921 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-07 03:44:55.692932 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-07 03:44:55.692952 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-07 03:44:55.692963 | orchestrator | 2026-04-07 03:44:55.692974 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-07 03:44:55.692997 | orchestrator | Tuesday 07 April 2026 03:44:55 +0000 (0:00:01.853) 0:00:13.774 ********* 2026-04-07 03:45:02.588236 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:45:02.588338 | orchestrator | 2026-04-07 03:45:02.588358 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-07 03:45:02.588373 | orchestrator | Tuesday 07 April 2026 03:44:56 +0000 (0:00:00.874) 0:00:14.649 ********* 2026-04-07 03:45:02.588387 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-07 03:45:02.588401 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-07 03:45:02.588413 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:45:02.588428 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:45:02.588441 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:45:02.588453 | orchestrator | 2026-04-07 03:45:02.588466 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-07 03:45:02.588480 | orchestrator | Tuesday 07 April 2026 03:44:57 +0000 (0:00:00.754) 0:00:15.404 ********* 2026-04-07 03:45:02.588493 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
03:45:02.588506 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:45:02.588519 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:45:02.588533 | orchestrator | 2026-04-07 03:45:02.588546 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-07 03:45:02.588558 | orchestrator | Tuesday 07 April 2026 03:44:57 +0000 (0:00:00.376) 0:00:15.781 ********* 2026-04-07 03:45:02.588574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092145, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.860664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092145, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.860664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092145, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.860664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092231, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.873462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092231, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.873462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588713 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092231, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.873462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092169, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.863139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092169, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.863139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588765 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092169, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.863139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092235, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8769953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092235, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8769953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:02.588853 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092235, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8769953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092187, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8669236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092187, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8669236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-07 03:45:06.419446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092187, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8669236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092217, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8711064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092217, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8711064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092217, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8711064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092140, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8592129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092140, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8592129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092140, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8592129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092158, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092158, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092158, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:06.419561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092170, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8639326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.238291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092170, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8639326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092170, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8639326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092196, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8681905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092196, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8681905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092196, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8681905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092227, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8727415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092227, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8727415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092227, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8727415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092164, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.862679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092164, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.862679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092164, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.862679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092211, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8703601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:10.239208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092211, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8703601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092211, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8703601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092191, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8676734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092191, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8676734, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092191, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8676734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092181, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.866139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092181, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1775525912.866139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092181, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.866139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092179, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.865804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092179, 'dev': 117, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.865804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092179, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.865804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092201, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.86936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 
1092201, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.86936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:14.637743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092201, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.86936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.623787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092173, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8652866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.623955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
44791, 'inode': 1092173, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8652866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.623969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092173, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8652866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.623979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092223, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.871139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092223, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.871139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092223, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.871139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092553, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9431403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092553, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9431403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092553, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9431403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092289, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8861392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092289, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8861392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092289, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8861392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:18.624201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8796551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8796551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092269, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8796551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904501 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092428, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092256, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.877539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-07 03:45:22.904574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092256, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.877539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092256, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.877539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092493, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.934567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092493, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.934567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092493, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.934567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092431, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775525912.931323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:22.904652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092431, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.931323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092431, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.931323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 22317, 'inode': 1092501, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9352493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092501, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9352493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092501, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9352493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092544, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.941296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092544, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.941296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092544, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.941296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092487, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092487, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092487, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724739 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092308, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9021394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092308, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9021394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:26.724758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092282, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8828375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390579 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092308, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9021394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092282, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8828375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092304, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8869352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-07 03:45:30.390780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092282, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8828375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092304, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8869352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8815513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092304, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8869352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8815513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092359, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092273, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8815513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092359, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092359, 'dev': 117, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9226363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:30.390920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092522, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9401402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092522, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9401402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 222049, 'inode': 1092522, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9401402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092514, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.936981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092514, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.936981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092514, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.936981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092258, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.878169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092258, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.878169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033836 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092258, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.878169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092260, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8788338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092260, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8788338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 
03:45:35.033859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092260, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.8788338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092478, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:45:35.033878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092478, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:47:18.240966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092478, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.93214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:47:18.241146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092508, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9359305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:47:18.241170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092508, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775525912.9359305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:47:18.241187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092508, 'dev': 117, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775525912.9359305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-07 03:47:18.241203 | orchestrator | 2026-04-07 03:47:18.241221 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-07 03:47:18.241237 | orchestrator | Tuesday 07 April 2026 03:45:36 +0000 (0:00:38.617) 0:00:54.398 ********* 2026-04-07 03:47:18.241252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:47:18.241319 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:47:18.241337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-07 03:47:18.241352 | orchestrator | 2026-04-07 03:47:18.241366 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-07 03:47:18.241380 | orchestrator | Tuesday 07 April 2026 03:45:37 +0000 (0:00:01.039) 0:00:55.437 ********* 2026-04-07 03:47:18.241394 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:47:18.241408 | orchestrator | 2026-04-07 03:47:18.241422 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-07 03:47:18.241436 | orchestrator | Tuesday 07 April 2026 03:45:39 +0000 (0:00:02.484) 0:00:57.922 
********* 2026-04-07 03:47:18.241452 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:47:18.241467 | orchestrator | 2026-04-07 03:47:18.241548 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 03:47:18.241568 | orchestrator | Tuesday 07 April 2026 03:45:42 +0000 (0:00:02.569) 0:01:00.492 ********* 2026-04-07 03:47:18.241585 | orchestrator | 2026-04-07 03:47:18.241603 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 03:47:18.241618 | orchestrator | Tuesday 07 April 2026 03:45:42 +0000 (0:00:00.094) 0:01:00.586 ********* 2026-04-07 03:47:18.241633 | orchestrator | 2026-04-07 03:47:18.241648 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-07 03:47:18.241663 | orchestrator | Tuesday 07 April 2026 03:45:42 +0000 (0:00:00.077) 0:01:00.664 ********* 2026-04-07 03:47:18.241678 | orchestrator | 2026-04-07 03:47:18.241693 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-07 03:47:18.241707 | orchestrator | Tuesday 07 April 2026 03:45:42 +0000 (0:00:00.075) 0:01:00.739 ********* 2026-04-07 03:47:18.241800 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:47:18.241814 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:47:18.241828 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:47:18.241841 | orchestrator | 2026-04-07 03:47:18.241855 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-07 03:47:18.241870 | orchestrator | Tuesday 07 April 2026 03:45:44 +0000 (0:00:02.315) 0:01:03.055 ********* 2026-04-07 03:47:18.241884 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:47:18.241898 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:47:18.241912 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries 
left). 2026-04-07 03:47:18.241927 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-07 03:47:18.241961 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-04-07 03:47:18.241977 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-04-07 03:47:18.241991 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:47:18.242005 | orchestrator | 2026-04-07 03:47:18.242087 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-07 03:47:18.242106 | orchestrator | Tuesday 07 April 2026 03:46:36 +0000 (0:00:51.598) 0:01:54.653 ********* 2026-04-07 03:47:18.242121 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:47:18.242136 | orchestrator | changed: [testbed-node-2] 2026-04-07 03:47:18.242149 | orchestrator | changed: [testbed-node-1] 2026-04-07 03:47:18.242158 | orchestrator | 2026-04-07 03:47:18.242166 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-07 03:47:18.242176 | orchestrator | Tuesday 07 April 2026 03:47:12 +0000 (0:00:36.100) 0:02:30.754 ********* 2026-04-07 03:47:18.242184 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:47:18.242193 | orchestrator | 2026-04-07 03:47:18.242201 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-07 03:47:18.242210 | orchestrator | Tuesday 07 April 2026 03:47:15 +0000 (0:00:02.429) 0:02:33.183 ********* 2026-04-07 03:47:18.242218 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:47:18.242227 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:47:18.242236 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:47:18.242244 | orchestrator | 2026-04-07 03:47:18.242253 | orchestrator | TASK [grafana : Enable grafana datasources] 
************************************ 2026-04-07 03:47:18.242273 | orchestrator | Tuesday 07 April 2026 03:47:15 +0000 (0:00:00.347) 0:02:33.531 ********* 2026-04-07 03:47:18.242284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-04-07 03:47:18.242314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-07 03:47:18.976342 | orchestrator | 2026-04-07 03:47:18.976445 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-07 03:47:18.976462 | orchestrator | Tuesday 07 April 2026 03:47:18 +0000 (0:00:02.793) 0:02:36.325 ********* 2026-04-07 03:47:18.976474 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:47:18.976487 | orchestrator | 2026-04-07 03:47:18.976498 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:47:18.976511 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:47:18.976524 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:47:18.976535 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-07 03:47:18.976546 | orchestrator | 2026-04-07 03:47:18.976556 | orchestrator | 2026-04-07 03:47:18.976568 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-07 03:47:18.976579 | orchestrator | Tuesday 07 April 2026 03:47:18 +0000 (0:00:00.327) 0:02:36.652 ********* 2026-04-07 03:47:18.976590 | orchestrator | =============================================================================== 2026-04-07 03:47:18.976618 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.60s 2026-04-07 03:47:18.976630 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.62s 2026-04-07 03:47:18.976663 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.10s 2026-04-07 03:47:18.976675 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.79s 2026-04-07 03:47:18.976686 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s 2026-04-07 03:47:18.976698 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.48s 2026-04-07 03:47:18.976773 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s 2026-04-07 03:47:18.976786 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.32s 2026-04-07 03:47:18.976797 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.85s 2026-04-07 03:47:18.976808 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.71s 2026-04-07 03:47:18.976819 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s 2026-04-07 03:47:18.976830 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.37s 2026-04-07 03:47:18.976841 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-04-07 03:47:18.976852 | orchestrator | grafana : Check grafana 
containers -------------------------------------- 1.04s 2026-04-07 03:47:18.976863 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.98s 2026-04-07 03:47:18.976873 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s 2026-04-07 03:47:18.976884 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.87s 2026-04-07 03:47:18.976895 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2026-04-07 03:47:18.976906 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.72s 2026-04-07 03:47:18.976917 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.64s 2026-04-07 03:47:19.364494 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-04-07 03:47:19.372061 | orchestrator | + set -e 2026-04-07 03:47:19.372127 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 03:47:19.372135 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 03:47:19.372140 | orchestrator | ++ INTERACTIVE=false 2026-04-07 03:47:19.372144 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 03:47:19.372148 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 03:47:19.372152 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 03:47:19.372156 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 03:47:19.372160 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 03:47:19.372164 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 03:47:19.372168 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 03:47:19.372172 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 03:47:19.372176 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 03:47:19.372180 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 03:47:19.372184 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 
03:47:19.372189 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 03:47:19.372193 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 03:47:19.372197 | orchestrator | ++ export ARA=false 2026-04-07 03:47:19.372201 | orchestrator | ++ ARA=false 2026-04-07 03:47:19.372205 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 03:47:19.372209 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 03:47:19.372212 | orchestrator | ++ export TEMPEST=false 2026-04-07 03:47:19.372216 | orchestrator | ++ TEMPEST=false 2026-04-07 03:47:19.372220 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 03:47:19.372224 | orchestrator | ++ IS_ZUUL=true 2026-04-07 03:47:19.372227 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 03:47:19.372232 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 03:47:19.372235 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 03:47:19.372239 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 03:47:19.372243 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 03:47:19.372246 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 03:47:19.372250 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 03:47:19.372254 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 03:47:19.372258 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 03:47:19.372262 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 03:47:19.372964 | orchestrator | ++ semver 9.5.0 8.0.0 2026-04-07 03:47:19.436825 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 03:47:19.436896 | orchestrator | + osism apply clusterapi 2026-04-07 03:47:22.546669 | orchestrator | 2026-04-07 03:47:22 | INFO  | Task 761762dd-5259-4ddb-925c-baad155ecbe0 (clusterapi) was prepared for execution. 2026-04-07 03:47:22.546821 | orchestrator | 2026-04-07 03:47:22 | INFO  | It takes a moment until task 761762dd-5259-4ddb-925c-baad155ecbe0 (clusterapi) has been started and output is visible here. 
2026-04-07 03:48:21.423827 | orchestrator | 2026-04-07 03:48:21.423939 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-07 03:48:21.423959 | orchestrator | 2026-04-07 03:48:21.423969 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-07 03:48:21.423980 | orchestrator | Tuesday 07 April 2026 03:47:27 +0000 (0:00:00.222) 0:00:00.223 ********* 2026-04-07 03:48:21.423991 | orchestrator | included: cert_manager for testbed-manager 2026-04-07 03:48:21.424002 | orchestrator | 2026-04-07 03:48:21.424012 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-07 03:48:21.424020 | orchestrator | Tuesday 07 April 2026 03:47:27 +0000 (0:00:00.292) 0:00:00.515 ********* 2026-04-07 03:48:21.424029 | orchestrator | changed: [testbed-manager] 2026-04-07 03:48:21.424038 | orchestrator | 2026-04-07 03:48:21.424045 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-07 03:48:21.424055 | orchestrator | Tuesday 07 April 2026 03:47:33 +0000 (0:00:05.709) 0:00:06.225 ********* 2026-04-07 03:48:21.424064 | orchestrator | changed: [testbed-manager] 2026-04-07 03:48:21.424074 | orchestrator | 2026-04-07 03:48:21.424084 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-07 03:48:21.424094 | orchestrator | 2026-04-07 03:48:21.424103 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-07 03:48:21.424113 | orchestrator | Tuesday 07 April 2026 03:47:59 +0000 (0:00:25.929) 0:00:32.154 ********* 2026-04-07 03:48:21.424119 | orchestrator | ok: [testbed-manager] 2026-04-07 03:48:21.424125 | orchestrator | 2026-04-07 03:48:21.424131 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-07 03:48:21.424137 | orchestrator | Tuesday 
07 April 2026 03:48:00 +0000 (0:00:01.248) 0:00:33.403 ********* 2026-04-07 03:48:21.424157 | orchestrator | ok: [testbed-manager] 2026-04-07 03:48:21.424163 | orchestrator | 2026-04-07 03:48:21.424168 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-07 03:48:21.424174 | orchestrator | Tuesday 07 April 2026 03:48:00 +0000 (0:00:00.148) 0:00:33.552 ********* 2026-04-07 03:48:21.424180 | orchestrator | ok: [testbed-manager] 2026-04-07 03:48:21.424185 | orchestrator | 2026-04-07 03:48:21.424191 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-07 03:48:21.424196 | orchestrator | Tuesday 07 April 2026 03:48:18 +0000 (0:00:17.459) 0:00:51.012 ********* 2026-04-07 03:48:21.424201 | orchestrator | skipping: [testbed-manager] 2026-04-07 03:48:21.424207 | orchestrator | 2026-04-07 03:48:21.424212 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-07 03:48:21.424218 | orchestrator | Tuesday 07 April 2026 03:48:18 +0000 (0:00:00.152) 0:00:51.165 ********* 2026-04-07 03:48:21.424223 | orchestrator | changed: [testbed-manager] 2026-04-07 03:48:21.424228 | orchestrator | 2026-04-07 03:48:21.424234 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 03:48:21.424241 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 03:48:21.424248 | orchestrator | 2026-04-07 03:48:21.424253 | orchestrator | 2026-04-07 03:48:21.424259 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 03:48:21.424264 | orchestrator | Tuesday 07 April 2026 03:48:20 +0000 (0:00:02.372) 0:00:53.537 ********* 2026-04-07 03:48:21.424270 | orchestrator | =============================================================================== 2026-04-07 03:48:21.424275 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 25.93s 2026-04-07 03:48:21.424299 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.46s 2026-04-07 03:48:21.424305 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.71s 2026-04-07 03:48:21.424310 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.37s 2026-04-07 03:48:21.424316 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.25s 2026-04-07 03:48:21.424321 | orchestrator | Include cert_manager role ----------------------------------------------- 0.29s 2026-04-07 03:48:21.424327 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s 2026-04-07 03:48:21.424332 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s 2026-04-07 03:48:21.825989 | orchestrator | + osism apply magnum 2026-04-07 03:48:24.088847 | orchestrator | 2026-04-07 03:48:24 | INFO  | Task 682642ad-8331-4343-a9ad-93d839c0d6bf (magnum) was prepared for execution. 2026-04-07 03:48:24.088916 | orchestrator | 2026-04-07 03:48:24 | INFO  | It takes a moment until task 682642ad-8331-4343-a9ad-93d839c0d6bf (magnum) has been started and output is visible here. 
2026-04-07 03:49:09.702934 | orchestrator | 2026-04-07 03:49:09.703077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 03:49:09.703095 | orchestrator | 2026-04-07 03:49:09.703107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 03:49:09.703120 | orchestrator | Tuesday 07 April 2026 03:48:28 +0000 (0:00:00.289) 0:00:00.289 ********* 2026-04-07 03:49:09.703132 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:49:09.703158 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:49:09.703169 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:49:09.703180 | orchestrator | 2026-04-07 03:49:09.703191 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 03:49:09.703202 | orchestrator | Tuesday 07 April 2026 03:48:29 +0000 (0:00:00.342) 0:00:00.631 ********* 2026-04-07 03:49:09.703213 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-07 03:49:09.703226 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-07 03:49:09.703233 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-07 03:49:09.703241 | orchestrator | 2026-04-07 03:49:09.703248 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-07 03:49:09.703255 | orchestrator | 2026-04-07 03:49:09.703262 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-07 03:49:09.703269 | orchestrator | Tuesday 07 April 2026 03:48:29 +0000 (0:00:00.521) 0:00:01.153 ********* 2026-04-07 03:49:09.703276 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:49:09.703284 | orchestrator | 2026-04-07 03:49:09.703291 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-07 
03:49:09.703298 | orchestrator | Tuesday 07 April 2026 03:48:30 +0000 (0:00:00.644) 0:00:01.798 ********* 2026-04-07 03:49:09.703305 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-07 03:49:09.703312 | orchestrator | 2026-04-07 03:49:09.703318 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-07 03:49:09.703325 | orchestrator | Tuesday 07 April 2026 03:48:34 +0000 (0:00:03.773) 0:00:05.571 ********* 2026-04-07 03:49:09.703332 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-07 03:49:09.703339 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-07 03:49:09.703346 | orchestrator | 2026-04-07 03:49:09.703356 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-07 03:49:09.703368 | orchestrator | Tuesday 07 April 2026 03:48:40 +0000 (0:00:06.935) 0:00:12.507 ********* 2026-04-07 03:49:09.703378 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-07 03:49:09.703419 | orchestrator | 2026-04-07 03:49:09.703469 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-07 03:49:09.703496 | orchestrator | Tuesday 07 April 2026 03:48:44 +0000 (0:00:03.718) 0:00:16.226 ********* 2026-04-07 03:49:09.703509 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-07 03:49:09.703538 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-07 03:49:09.703550 | orchestrator | 2026-04-07 03:49:09.703562 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-07 03:49:09.703570 | orchestrator | Tuesday 07 April 2026 03:48:48 +0000 (0:00:04.236) 0:00:20.462 ********* 2026-04-07 03:49:09.703578 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-07 03:49:09.703586 | orchestrator | 2026-04-07 03:49:09.703594 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-07 03:49:09.703602 | orchestrator | Tuesday 07 April 2026 03:48:52 +0000 (0:00:03.533) 0:00:23.996 ********* 2026-04-07 03:49:09.703611 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-07 03:49:09.703694 | orchestrator | 2026-04-07 03:49:09.703703 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-07 03:49:09.703710 | orchestrator | Tuesday 07 April 2026 03:48:56 +0000 (0:00:04.041) 0:00:28.037 ********* 2026-04-07 03:49:09.703717 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:49:09.703724 | orchestrator | 2026-04-07 03:49:09.703730 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-07 03:49:09.703737 | orchestrator | Tuesday 07 April 2026 03:49:00 +0000 (0:00:03.562) 0:00:31.600 ********* 2026-04-07 03:49:09.703744 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:49:09.703750 | orchestrator | 2026-04-07 03:49:09.703757 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-07 03:49:09.703764 | orchestrator | Tuesday 07 April 2026 03:49:04 +0000 (0:00:04.162) 0:00:35.762 ********* 2026-04-07 03:49:09.703771 | orchestrator | changed: [testbed-node-0] 2026-04-07 03:49:09.703777 | orchestrator | 2026-04-07 03:49:09.703784 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-07 03:49:09.703791 | orchestrator | Tuesday 07 April 2026 03:49:08 +0000 (0:00:03.778) 0:00:39.541 ********* 2026-04-07 03:49:09.703830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:09.703843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:09.703864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:09.703872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:09.703880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:09.703894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:17.788015 | orchestrator | 2026-04-07 03:49:17.788103 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-07 03:49:17.788117 | orchestrator | Tuesday 07 April 2026 03:49:09 +0000 (0:00:01.669) 0:00:41.211 ********* 2026-04-07 03:49:17.788125 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:49:17.788136 | orchestrator | 2026-04-07 03:49:17.788145 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-07 03:49:17.788150 | orchestrator | Tuesday 07 April 2026 03:49:09 +0000 (0:00:00.150) 0:00:41.361 ********* 2026-04-07 03:49:17.788155 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:49:17.788160 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:49:17.788165 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:49:17.788188 | orchestrator | 2026-04-07 03:49:17.788193 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-07 03:49:17.788198 | orchestrator | Tuesday 07 April 2026 03:49:10 +0000 (0:00:00.333) 0:00:41.694 ********* 2026-04-07 03:49:17.788206 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-07 03:49:17.788213 | orchestrator | 2026-04-07 03:49:17.788222 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-07 03:49:17.788229 | orchestrator | Tuesday 07 April 2026 03:49:11 +0000 (0:00:00.997) 0:00:42.692 ********* 2026-04-07 03:49:17.788240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:17.788263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:17.788269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:17.788286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:17.788298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:17.788304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:17.788308 | orchestrator | 2026-04-07 03:49:17.788317 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-07 03:49:17.788321 
| orchestrator | Tuesday 07 April 2026 03:49:13 +0000 (0:00:02.619) 0:00:45.312 ********* 2026-04-07 03:49:17.788326 | orchestrator | ok: [testbed-node-0] 2026-04-07 03:49:17.788332 | orchestrator | ok: [testbed-node-1] 2026-04-07 03:49:17.788336 | orchestrator | ok: [testbed-node-2] 2026-04-07 03:49:17.788341 | orchestrator | 2026-04-07 03:49:17.788345 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-07 03:49:17.788350 | orchestrator | Tuesday 07 April 2026 03:49:14 +0000 (0:00:00.583) 0:00:45.895 ********* 2026-04-07 03:49:17.788355 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 03:49:17.788360 | orchestrator | 2026-04-07 03:49:17.788365 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-07 03:49:17.788369 | orchestrator | Tuesday 07 April 2026 03:49:15 +0000 (0:00:00.702) 0:00:46.598 ********* 2026-04-07 03:49:17.788374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:17.788384 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:18.864063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:18.864170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:18.864181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:18.864189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:18.864198 | orchestrator | 2026-04-07 03:49:18.864208 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-07 03:49:18.864218 | orchestrator | Tuesday 07 April 2026 03:49:17 +0000 (0:00:02.707) 0:00:49.305 ********* 2026-04-07 03:49:18.864259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:18.864268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:18.864276 | orchestrator | skipping: [testbed-node-0] 2026-04-07 03:49:18.864289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:18.864297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:18.864305 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:49:18.864312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:18.864331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:22.763333 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:49:22.763518 | orchestrator | 2026-04-07 
03:49:22.763539 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-07 03:49:22.763552 | orchestrator | Tuesday 07 April 2026 03:49:18 +0000 (0:00:01.068) 0:00:50.374 ********* 2026-04-07 03:49:22.763565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:22.763736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:22.763756 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 03:49:22.763768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:22.763804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:22.763816 | orchestrator | skipping: [testbed-node-1] 2026-04-07 03:49:22.763847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-07 03:49:22.763862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 03:49:22.763876 | orchestrator | skipping: [testbed-node-2] 2026-04-07 03:49:22.763888 | orchestrator | 2026-04-07 03:49:22.763902 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-07 03:49:22.763915 | orchestrator | Tuesday 07 April 2026 03:49:19 +0000 (0:00:00.948) 0:00:51.322 ********* 2026-04-07 03:49:22.763934 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:22.763946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:22.763974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:29.493452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:29.493545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:29.493554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-07 03:49:29.493574 | orchestrator | 2026-04-07 03:49:29.493580 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-07 03:49:29.493586 | orchestrator | Tuesday 07 April 2026 03:49:22 +0000 (0:00:02.960) 0:00:54.283 ********* 2026-04-07 03:49:29.493590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:29.493649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-07 03:49:29.493655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:29.493662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:29.493667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:29.493675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:29.493680 | orchestrator |
2026-04-07 03:49:29.493684 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-04-07 03:49:29.493688 | orchestrator | Tuesday 07 April 2026 03:49:28 +0000 (0:00:05.996) 0:01:00.279 *********
2026-04-07 03:49:29.493722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:31.605145 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:49:31.605170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:31.605205 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:49:31.605217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:31.605256 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:49:31.605267 | orchestrator |
2026-04-07 03:49:31.605278 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-04-07 03:49:31.605290 | orchestrator | Tuesday 07 April 2026 03:49:29 +0000 (0:00:00.734) 0:01:01.013 *********
2026-04-07 03:49:31.605307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-07 03:49:31.605350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:49:31.605370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:50:21.361777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-07 03:50:21.361985 | orchestrator |
2026-04-07 03:50:21.362002 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-07 03:50:21.362078 | orchestrator | Tuesday 07 April 2026 03:49:31 +0000 (0:00:02.109) 0:01:03.123 *********
2026-04-07 03:50:21.362093 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:50:21.362104 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:50:21.362114 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:50:21.362123 | orchestrator |
2026-04-07 03:50:21.362133 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-04-07 03:50:21.362142 | orchestrator | Tuesday 07 April 2026 03:49:32 +0000 (0:00:00.609) 0:01:03.732 *********
2026-04-07 03:50:21.362152 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:50:21.362162 | orchestrator |
2026-04-07 03:50:21.362172 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-04-07 03:50:21.362182 | orchestrator | Tuesday 07 April 2026 03:49:34 +0000 (0:00:02.366) 0:01:06.099 *********
2026-04-07 03:50:21.362189 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:50:21.362197 | orchestrator |
2026-04-07 03:50:21.362205 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-07 03:50:21.362213 | orchestrator | Tuesday 07 April 2026 03:49:37 +0000 (0:00:02.513) 0:01:08.612 *********
2026-04-07 03:50:21.362222 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:50:21.362231 | orchestrator |
2026-04-07 03:50:21.362239 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-07 03:50:21.362248 | orchestrator | Tuesday 07 April 2026 03:49:54 +0000 (0:00:17.458) 0:01:26.070 *********
2026-04-07 03:50:21.362256 | orchestrator |
2026-04-07 03:50:21.362265 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-07 03:50:21.362274 | orchestrator | Tuesday 07 April 2026 03:49:54 +0000 (0:00:00.079) 0:01:26.150 *********
2026-04-07 03:50:21.362283 | orchestrator |
2026-04-07 03:50:21.362292 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-07 03:50:21.362300 | orchestrator | Tuesday 07 April 2026 03:49:54 +0000 (0:00:00.076) 0:01:26.226 *********
2026-04-07 03:50:21.362308 | orchestrator |
2026-04-07 03:50:21.362317 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-07 03:50:21.362325 | orchestrator | Tuesday 07 April 2026 03:49:54 +0000 (0:00:00.076) 0:01:26.303 *********
2026-04-07 03:50:21.362334 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:50:21.362343 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:50:21.362351 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:50:21.362360 | orchestrator |
2026-04-07 03:50:21.362370 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-07 03:50:21.362380 | orchestrator | Tuesday 07 April 2026 03:50:09 +0000 (0:00:14.849) 0:01:41.152 *********
2026-04-07 03:50:21.362389 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:50:21.362398 | orchestrator | changed: [testbed-node-1]
2026-04-07 03:50:21.362408 | orchestrator | changed: [testbed-node-2]
2026-04-07 03:50:21.362416 | orchestrator |
2026-04-07 03:50:21.362425 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:50:21.362435 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 03:50:21.362447 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 03:50:21.362456 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-07 03:50:21.362464 | orchestrator |
2026-04-07 03:50:21.362472 | orchestrator |
2026-04-07 03:50:21.362493 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:50:21.362513 | orchestrator | Tuesday 07 April 2026 03:50:20 +0000 (0:00:11.280) 0:01:52.433 *********
2026-04-07 03:50:21.362521 | orchestrator | ===============================================================================
2026-04-07 03:50:21.362529 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.46s
2026-04-07 03:50:21.362536 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.85s
2026-04-07 03:50:21.362544 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.28s
2026-04-07 03:50:21.362553 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.94s
2026-04-07 03:50:21.362622 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.00s
2026-04-07 03:50:21.362647 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.24s
2026-04-07 03:50:21.362657 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.16s
2026-04-07 03:50:21.362688 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.04s
2026-04-07 03:50:21.362697 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.78s
2026-04-07 03:50:21.362706 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.77s
2026-04-07 03:50:21.362716 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.72s
2026-04-07 03:50:21.362725 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.56s
2026-04-07 03:50:21.362734 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.53s
2026-04-07 03:50:21.362743 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.96s
2026-04-07 03:50:21.362753 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.71s
2026-04-07 03:50:21.362793 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.62s
2026-04-07 03:50:21.362804 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s
2026-04-07 03:50:21.362811 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.37s
2026-04-07 03:50:21.362833 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.11s
2026-04-07 03:50:21.362840 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.67s
2026-04-07 03:50:22.125097 | orchestrator | ok: Runtime: 1:47:48.767960
2026-04-07 03:50:22.354664 |
2026-04-07 03:50:22.354798 | TASK [Deploy in a nutshell]
2026-04-07 03:50:22.890364 | orchestrator | skipping: Conditional result was False
2026-04-07 03:50:22.911959 |
2026-04-07 03:50:22.912114 | TASK [Bootstrap services]
2026-04-07 03:50:23.675257 | orchestrator |
2026-04-07 03:50:23.675483 | orchestrator | # BOOTSTRAP
2026-04-07 03:50:23.675520 | orchestrator |
2026-04-07 03:50:23.675541 | orchestrator | + set -e
2026-04-07 03:50:23.675587 | orchestrator | + echo
2026-04-07 03:50:23.675609 | orchestrator | + echo '# BOOTSTRAP'
2026-04-07 03:50:23.675631 | orchestrator | + echo
2026-04-07 03:50:23.675672 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-04-07 03:50:23.682685 | orchestrator | + set -e
2026-04-07 03:50:23.682791 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-04-07 03:50:25.987110 | orchestrator | 2026-04-07 03:50:25 | INFO  | It takes a moment until task 845aa7c3-fc88-4052-8526-abf60cf42752 (flavor-manager) has been started and output is visible here.
2026-04-07 03:50:34.909963 | orchestrator | 2026-04-07 03:50:29 | INFO  | Flavor SCS-1L-1 created
2026-04-07 03:50:34.910157 | orchestrator | 2026-04-07 03:50:30 | INFO  | Flavor SCS-1L-1-5 created
2026-04-07 03:50:34.910186 | orchestrator | 2026-04-07 03:50:30 | INFO  | Flavor SCS-1V-2 created
2026-04-07 03:50:34.910202 | orchestrator | 2026-04-07 03:50:30 | INFO  | Flavor SCS-1V-2-5 created
2026-04-07 03:50:34.910215 | orchestrator | 2026-04-07 03:50:30 | INFO  | Flavor SCS-1V-4 created
2026-04-07 03:50:34.910229 | orchestrator | 2026-04-07 03:50:30 | INFO  | Flavor SCS-1V-4-10 created
2026-04-07 03:50:34.910245 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-1V-8 created
2026-04-07 03:50:34.910260 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-1V-8-20 created
2026-04-07 03:50:34.910285 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-2V-4 created
2026-04-07 03:50:34.910295 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-2V-4-10 created
2026-04-07 03:50:34.910303 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-2V-8 created
2026-04-07 03:50:34.910311 | orchestrator | 2026-04-07 03:50:31 | INFO  | Flavor SCS-2V-8-20 created
2026-04-07 03:50:34.910320 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-2V-16 created
2026-04-07 03:50:34.910328 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-2V-16-50 created
2026-04-07 03:50:34.910336 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-4V-8 created
2026-04-07 03:50:34.910344 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-4V-8-20 created
2026-04-07 03:50:34.910352 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-4V-16 created
2026-04-07 03:50:34.910360 | orchestrator | 2026-04-07 03:50:32 | INFO  | Flavor SCS-4V-16-50 created
2026-04-07 03:50:34.910368 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-4V-32 created
2026-04-07 03:50:34.910376 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-4V-32-100 created
2026-04-07 03:50:34.910384 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-8V-16 created
2026-04-07 03:50:34.910392 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-8V-16-50 created
2026-04-07 03:50:34.910401 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-8V-32 created
2026-04-07 03:50:34.910409 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-8V-32-100 created
2026-04-07 03:50:34.910417 | orchestrator | 2026-04-07 03:50:33 | INFO  | Flavor SCS-16V-32 created
2026-04-07 03:50:34.910425 | orchestrator | 2026-04-07 03:50:34 | INFO  | Flavor SCS-16V-32-100 created
2026-04-07 03:50:34.910433 | orchestrator | 2026-04-07 03:50:34 | INFO  | Flavor SCS-2V-4-20s created
2026-04-07 03:50:34.910441 | orchestrator | 2026-04-07 03:50:34 | INFO  | Flavor SCS-4V-8-50s created
2026-04-07 03:50:34.910450 | orchestrator | 2026-04-07 03:50:34 | INFO  | Flavor SCS-8V-32-100s created
2026-04-07 03:50:37.424166 | orchestrator | 2026-04-07 03:50:37 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-04-07 03:50:41.125275 | orchestrator | 2026-04-07 03:50:41 | INFO  | Task ba778dbf-b059-4922-bc96-1d4b7c026ff3 (bootstrap-basic) was prepared for execution.
2026-04-07 03:50:41.125360 | orchestrator | 2026-04-07 03:50:41 | INFO  | It takes a moment until task ba778dbf-b059-4922-bc96-1d4b7c026ff3 (bootstrap-basic) has been started and output is visible here.
2026-04-07 03:51:29.979342 | orchestrator |
2026-04-07 03:51:29.979495 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-07 03:51:29.979584 | orchestrator |
2026-04-07 03:51:29.979605 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-07 03:51:29.979634 | orchestrator | Tuesday 07 April 2026 03:50:46 +0000 (0:00:00.083) 0:00:00.083 *********
2026-04-07 03:51:29.979657 | orchestrator | ok: [localhost]
2026-04-07 03:51:29.979679 | orchestrator |
2026-04-07 03:51:29.979697 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-07 03:51:29.979714 | orchestrator | Tuesday 07 April 2026 03:50:48 +0000 (0:00:02.045) 0:00:02.129 *********
2026-04-07 03:51:29.979730 | orchestrator | ok: [localhost]
2026-04-07 03:51:29.979745 | orchestrator |
2026-04-07 03:51:29.979763 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-07 03:51:29.979779 | orchestrator | Tuesday 07 April 2026 03:50:56 +0000 (0:00:08.130) 0:00:10.260 *********
2026-04-07 03:51:29.979798 | orchestrator | changed: [localhost]
2026-04-07 03:51:29.979814 | orchestrator |
2026-04-07 03:51:29.979837 | orchestrator | TASK [Create public network] ***************************************************
2026-04-07 03:51:29.979854 | orchestrator | Tuesday 07 April 2026 03:51:03 +0000 (0:00:06.969) 0:00:17.229 *********
2026-04-07 03:51:29.979875 | orchestrator | changed: [localhost]
2026-04-07 03:51:29.979897 | orchestrator |
2026-04-07 03:51:29.979919 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-07 03:51:29.979940 | orchestrator | Tuesday 07 April 2026 03:51:09 +0000 (0:00:05.925) 0:00:23.154 *********
2026-04-07 03:51:29.979967 | orchestrator | changed: [localhost]
2026-04-07 03:51:29.979987 | orchestrator |
2026-04-07 03:51:29.980006 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-07 03:51:29.980025 | orchestrator | Tuesday 07 April 2026 03:51:16 +0000 (0:00:07.024) 0:00:30.179 *********
2026-04-07 03:51:29.980043 | orchestrator | changed: [localhost]
2026-04-07 03:51:29.980065 | orchestrator |
2026-04-07 03:51:29.980085 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-07 03:51:29.980105 | orchestrator | Tuesday 07 April 2026 03:51:21 +0000 (0:00:04.885) 0:00:35.065 *********
2026-04-07 03:51:29.980125 | orchestrator | changed: [localhost]
2026-04-07 03:51:29.980146 | orchestrator |
2026-04-07 03:51:29.980168 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-07 03:51:29.980208 | orchestrator | Tuesday 07 April 2026 03:51:25 +0000 (0:00:04.340) 0:00:39.405 *********
2026-04-07 03:51:29.980228 | orchestrator | ok: [localhost]
2026-04-07 03:51:29.980246 | orchestrator |
2026-04-07 03:51:29.980264 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:51:29.980282 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-07 03:51:29.980301 | orchestrator |
2026-04-07 03:51:29.980321 | orchestrator |
2026-04-07 03:51:29.980341 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:51:29.980360 | orchestrator | Tuesday 07 April 2026 03:51:29 +0000 (0:00:03.973) 0:00:43.378 *********
2026-04-07 03:51:29.980381 | orchestrator | ===============================================================================
2026-04-07 03:51:29.980400 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.13s
2026-04-07 03:51:29.980421 | orchestrator | Set public network to default ------------------------------------------- 7.02s
2026-04-07 03:51:29.980440 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.97s
2026-04-07 03:51:29.980459 | orchestrator | Create public network --------------------------------------------------- 5.93s
2026-04-07 03:51:29.980552 | orchestrator | Create public subnet ---------------------------------------------------- 4.89s
2026-04-07 03:51:29.980576 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.34s
2026-04-07 03:51:29.980595 | orchestrator | Create manager role ----------------------------------------------------- 3.97s
2026-04-07 03:51:29.980613 | orchestrator | Gathering Facts --------------------------------------------------------- 2.05s
2026-04-07 03:51:32.738589 | orchestrator | 2026-04-07 03:51:32 | INFO  | It takes a moment until task 1c37354d-baf1-4e3e-8f84-6593a1563dd0 (image-manager) has been started and output is visible here.
2026-04-07 03:52:17.679999 | orchestrator | 2026-04-07 03:51:35 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-07 03:52:17.680097 | orchestrator | 2026-04-07 03:51:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-07 03:52:17.680110 | orchestrator | 2026-04-07 03:51:35 | INFO  | Importing image Cirros 0.6.2
2026-04-07 03:52:17.680117 | orchestrator | 2026-04-07 03:51:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-07 03:52:17.680125 | orchestrator | 2026-04-07 03:51:38 | INFO  | Waiting for image to leave queued state...
2026-04-07 03:52:17.680132 | orchestrator | 2026-04-07 03:51:40 | INFO  | Waiting for import to complete...
2026-04-07 03:52:17.680138 | orchestrator | 2026-04-07 03:51:50 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-07 03:52:17.680145 | orchestrator | 2026-04-07 03:51:50 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-07 03:52:17.680151 | orchestrator | 2026-04-07 03:51:50 | INFO  | Setting internal_version = 0.6.2
2026-04-07 03:52:17.680157 | orchestrator | 2026-04-07 03:51:50 | INFO  | Setting image_original_user = cirros
2026-04-07 03:52:17.680163 | orchestrator | 2026-04-07 03:51:50 | INFO  | Adding tag os:cirros
2026-04-07 03:52:17.680169 | orchestrator | 2026-04-07 03:51:51 | INFO  | Setting property architecture: x86_64
2026-04-07 03:52:17.680175 | orchestrator | 2026-04-07 03:51:51 | INFO  | Setting property hw_disk_bus: scsi
2026-04-07 03:52:17.680181 | orchestrator | 2026-04-07 03:51:51 | INFO  | Setting property hw_rng_model: virtio
2026-04-07 03:52:17.680188 | orchestrator | 2026-04-07 03:51:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-07 03:52:17.680194 | orchestrator | 2026-04-07 03:51:52 | INFO  | Setting property hw_watchdog_action: reset
2026-04-07 03:52:17.680201 | orchestrator | 2026-04-07 03:51:52 | INFO  | Setting property hypervisor_type: qemu
2026-04-07 03:52:17.680208 | orchestrator | 2026-04-07 03:51:52 | INFO  | Setting property os_distro: cirros
2026-04-07 03:52:17.680214 | orchestrator | 2026-04-07 03:51:52 | INFO  | Setting property os_purpose: minimal
2026-04-07 03:52:17.680220 | orchestrator | 2026-04-07 03:51:53 | INFO  | Setting property replace_frequency: never
2026-04-07 03:52:17.680227 | orchestrator | 2026-04-07 03:51:53 | INFO  | Setting property uuid_validity: none
2026-04-07 03:52:17.680232 | orchestrator | 2026-04-07 03:51:53 | INFO  | Setting property provided_until: none
2026-04-07 03:52:17.680238 | orchestrator | 2026-04-07 03:51:54 | INFO  | Setting property image_description: Cirros
2026-04-07 03:52:17.680244 | orchestrator | 2026-04-07 03:51:54 | INFO  | Setting property image_name: Cirros
2026-04-07 03:52:17.680250 | orchestrator | 2026-04-07 03:51:54 | INFO  | Setting property internal_version: 0.6.2
2026-04-07 03:52:17.680256 | orchestrator | 2026-04-07 03:51:55 | INFO  | Setting property image_original_user: cirros
2026-04-07 03:52:17.680284 | orchestrator | 2026-04-07 03:51:55 | INFO  | Setting property os_version: 0.6.2
2026-04-07 03:52:17.680299 | orchestrator | 2026-04-07 03:51:55 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-07 03:52:17.680307 | orchestrator | 2026-04-07 03:51:55 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-07 03:52:17.680313 | orchestrator | 2026-04-07 03:51:56 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-07 03:52:17.680319 | orchestrator | 2026-04-07 03:51:56 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-07 03:52:17.680324 | orchestrator | 2026-04-07 03:51:56 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-07 03:52:17.680331 | orchestrator | 2026-04-07 03:51:56 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-07 03:52:17.680341 | orchestrator | 2026-04-07 03:51:56 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-07 03:52:17.680348 | orchestrator | 2026-04-07 03:51:56 | INFO  | Importing image Cirros 0.6.3
2026-04-07 03:52:17.680354 | orchestrator | 2026-04-07 03:51:56 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-07 03:52:17.680360 | orchestrator | 2026-04-07 03:51:58 | INFO  | Waiting for image to leave queued state...
2026-04-07 03:52:17.680366 | orchestrator | 2026-04-07 03:52:00 | INFO  | Waiting for import to complete...
2026-04-07 03:52:17.680388 | orchestrator | 2026-04-07 03:52:10 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-07 03:52:17.680394 | orchestrator | 2026-04-07 03:52:11 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-07 03:52:17.680413 | orchestrator | 2026-04-07 03:52:11 | INFO  | Setting internal_version = 0.6.3
2026-04-07 03:52:17.680419 | orchestrator | 2026-04-07 03:52:11 | INFO  | Setting image_original_user = cirros
2026-04-07 03:52:17.680425 | orchestrator | 2026-04-07 03:52:11 | INFO  | Adding tag os:cirros
2026-04-07 03:52:17.680437 | orchestrator | 2026-04-07 03:52:11 | INFO  | Setting property architecture: x86_64
2026-04-07 03:52:17.680443 | orchestrator | 2026-04-07 03:52:11 | INFO  | Setting property hw_disk_bus: scsi
2026-04-07 03:52:17.680449 | orchestrator | 2026-04-07 03:52:12 | INFO  | Setting property hw_rng_model: virtio
2026-04-07 03:52:17.680455 | orchestrator | 2026-04-07 03:52:12 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-07 03:52:17.680460 | orchestrator | 2026-04-07 03:52:12 | INFO  | Setting property hw_watchdog_action: reset
2026-04-07 03:52:17.680466 | orchestrator | 2026-04-07 03:52:13 | INFO  | Setting property hypervisor_type: qemu
2026-04-07 03:52:17.680510 | orchestrator | 2026-04-07 03:52:13 | INFO  | Setting property os_distro: cirros
2026-04-07 03:52:17.680517 | orchestrator | 2026-04-07 03:52:13 | INFO  | Setting property os_purpose: minimal
2026-04-07 03:52:17.680523 | orchestrator | 2026-04-07 03:52:13 | INFO  | Setting property replace_frequency: never
2026-04-07 03:52:17.680529 | orchestrator | 2026-04-07 03:52:14 | INFO  | Setting property uuid_validity: none
2026-04-07 03:52:17.680535 | orchestrator | 2026-04-07 03:52:14 | INFO  | Setting property provided_until: none
2026-04-07 03:52:17.680542 | orchestrator | 2026-04-07 03:52:14 | INFO  | Setting property image_description: Cirros
2026-04-07 03:52:17.680548 | orchestrator | 2026-04-07 03:52:14 | INFO  | Setting property image_name: Cirros
2026-04-07 03:52:17.680555 | orchestrator | 2026-04-07 03:52:15 | INFO  | Setting property internal_version: 0.6.3
2026-04-07 03:52:17.680569 | orchestrator | 2026-04-07 03:52:15 | INFO  | Setting property image_original_user: cirros
2026-04-07 03:52:17.680576 | orchestrator | 2026-04-07 03:52:15 | INFO  | Setting property os_version: 0.6.3
2026-04-07 03:52:17.680582 | orchestrator | 2026-04-07 03:52:16 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-07 03:52:17.680589 | orchestrator | 2026-04-07 03:52:16 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-07 03:52:17.680596 | orchestrator | 2026-04-07 03:52:16 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-07 03:52:17.680603 | orchestrator | 2026-04-07 03:52:16 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-07 03:52:17.680610 | orchestrator | 2026-04-07 03:52:16 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-07 03:52:18.042465 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-07 03:52:20.466008 | orchestrator | 2026-04-07 03:52:20 | INFO  | date: 2026-04-07
2026-04-07 03:52:20.466120 | orchestrator | 2026-04-07 03:52:20 | INFO  | image: octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-07 03:52:20.466145 | orchestrator | 2026-04-07 03:52:20 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-07 03:52:20.466153 | orchestrator | 2026-04-07 03:52:20 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2.CHECKSUM
2026-04-07 03:52:20.614992 | orchestrator | 2026-04-07 03:52:20 | INFO  | checksum: c4f8130b9b88752cd3a30f3b2f025c70b2421aeafd1894491d662bda8fc15d00
2026-04-07 03:52:20.699915 | orchestrator | 2026-04-07 03:52:20 | INFO  | It takes a moment until task 5e73fba5-d30e-4c4d-993a-e0ed0c8be1a2 (image-manager) has been started and output is visible here.
2026-04-07 03:53:34.592298 | orchestrator | 2026-04-07 03:52:23 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-07'
2026-04-07 03:53:34.592509 | orchestrator | 2026-04-07 03:52:23 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2: 200
2026-04-07 03:53:34.592530 | orchestrator | 2026-04-07 03:52:23 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-07
2026-04-07 03:53:34.592539 | orchestrator | 2026-04-07 03:52:23 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-07 03:53:34.592549 | orchestrator | 2026-04-07 03:52:24 | INFO  | Waiting for image to leave queued state...
2026-04-07 03:53:34.592557 | orchestrator | 2026-04-07 03:52:26 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592566 | orchestrator | 2026-04-07 03:52:37 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592574 | orchestrator | 2026-04-07 03:52:47 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592582 | orchestrator | 2026-04-07 03:52:57 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592592 | orchestrator | 2026-04-07 03:53:07 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592601 | orchestrator | 2026-04-07 03:53:17 | INFO  | Waiting for import to complete...
2026-04-07 03:53:34.592609 | orchestrator | 2026-04-07 03:53:27 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-07' successfully completed, reloading images
2026-04-07 03:53:34.592618 | orchestrator | 2026-04-07 03:53:28 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-07 03:53:34.592648 | orchestrator | 2026-04-07 03:53:28 | INFO  | Setting internal_version = 2026-04-07
2026-04-07 03:53:34.592657 | orchestrator | 2026-04-07 03:53:28 | INFO  | Setting image_original_user = ubuntu
2026-04-07 03:53:34.592665 | orchestrator | 2026-04-07 03:53:28 | INFO  | Adding tag amphora
2026-04-07 03:53:34.592673 | orchestrator | 2026-04-07 03:53:28 | INFO  | Adding tag os:ubuntu
2026-04-07 03:53:34.592681 | orchestrator | 2026-04-07 03:53:28 | INFO  | Setting property architecture: x86_64
2026-04-07 03:53:34.592689 | orchestrator | 2026-04-07 03:53:29 | INFO  | Setting property hw_disk_bus: scsi
2026-04-07 03:53:34.592696 | orchestrator | 2026-04-07 03:53:29 | INFO  | Setting property hw_rng_model: virtio
2026-04-07 03:53:34.592704 | orchestrator | 2026-04-07 03:53:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-07 03:53:34.592712 | orchestrator | 2026-04-07 03:53:30 | INFO  | Setting property hw_watchdog_action: reset
2026-04-07 03:53:34.592720 | orchestrator | 2026-04-07 03:53:30 | INFO  | Setting property hypervisor_type: qemu
2026-04-07 03:53:34.592728 | orchestrator | 2026-04-07 03:53:30 | INFO  | Setting property os_distro: ubuntu
2026-04-07 03:53:34.592736 | orchestrator | 2026-04-07 03:53:30 | INFO  | Setting property replace_frequency: quarterly
2026-04-07 03:53:34.592744 | orchestrator | 2026-04-07 03:53:31 | INFO  | Setting property uuid_validity: last-1
2026-04-07 03:53:34.592751 | orchestrator | 2026-04-07 03:53:31 | INFO  | Setting property provided_until: none
2026-04-07 03:53:34.592759 | orchestrator | 2026-04-07 03:53:31 | INFO  | Setting property os_purpose: network
2026-04-07 03:53:34.592781 | orchestrator | 2026-04-07 03:53:32 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-07 03:53:34.592790 | orchestrator | 2026-04-07 03:53:32 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-07 03:53:34.592798 | orchestrator | 2026-04-07 03:53:32 | INFO  | Setting property internal_version: 2026-04-07
2026-04-07 03:53:34.592806 | orchestrator | 2026-04-07 03:53:32 | INFO  | Setting property image_original_user: ubuntu
2026-04-07 03:53:34.592814 | orchestrator | 2026-04-07 03:53:33 | INFO  | Setting property os_version: 2026-04-07
2026-04-07 03:53:34.592824 | orchestrator | 2026-04-07 03:53:33 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-07 03:53:34.592834 | orchestrator | 2026-04-07 03:53:33 | INFO  | Setting property image_build_date: 2026-04-07
2026-04-07 03:53:34.592843 | orchestrator | 2026-04-07 03:53:34 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-07 03:53:34.592855 | orchestrator | 2026-04-07 03:53:34 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-07 03:53:34.592890 | orchestrator | 2026-04-07 03:53:34 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-07 03:53:34.592905 | orchestrator | 2026-04-07 03:53:34 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-07 03:53:34.592919 | orchestrator | 2026-04-07 03:53:34 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-07 03:53:34.592933 | orchestrator | 2026-04-07 03:53:34 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-07 03:53:35.102314 | orchestrator | ok: Runtime: 0:03:11.747629
2026-04-07 03:53:35.117704 |
2026-04-07 03:53:35.117871 | TASK [Run checks]
2026-04-07 03:53:35.853635 | orchestrator | + set -e
2026-04-07 03:53:35.853799 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 03:53:35.853816 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 03:53:35.853829 | orchestrator | ++ INTERACTIVE=false
2026-04-07 03:53:35.853837 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 03:53:35.853844 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 03:53:35.853853 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-07 03:53:35.854066 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-07 03:53:35.858942 | orchestrator |
2026-04-07 03:53:35.859050 | orchestrator | # CHECK
2026-04-07 03:53:35.859065 | orchestrator |
2026-04-07 03:53:35.859078 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-07 03:53:35.859093 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-07 03:53:35.859104 | orchestrator | + echo
2026-04-07 03:53:35.859114 | orchestrator | + echo '# CHECK'
2026-04-07 03:53:35.859124 | orchestrator | + echo
2026-04-07 03:53:35.859138 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-07 03:53:35.859278 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-07 03:53:35.915950 | orchestrator |
2026-04-07 03:53:35.916057 | orchestrator | ## Containers @ testbed-manager
2026-04-07 03:53:35.916072 | orchestrator |
2026-04-07 03:53:35.916085 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-07 03:53:35.916095 | orchestrator | + echo
2026-04-07 03:53:35.916106 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-07 03:53:35.916143 | orchestrator | + echo
2026-04-07 03:53:35.916154 | orchestrator | + osism container testbed-manager ps
2026-04-07 03:53:38.183299 | orchestrator | 2026-04-07 03:53:38 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-07 03:53:38.567276 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-07 03:53:38.567516 | orchestrator | dccdf312ce90 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-04-07 03:53:38.567566 | orchestrator | 5b644ea12189 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-04-07 03:53:38.567592 | orchestrator | 1a3487866ea1 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-07 03:53:38.567604 | orchestrator | 25b232151f85 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-07 03:53:38.567616 | orchestrator | 8e0b79facfa8 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-04-07 03:53:38.567634 | orchestrator | 808aa9d713b0 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient
2026-04-07 03:53:38.567647 | orchestrator | 83821cd33ba1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-07 03:53:38.567677 | orchestrator | a4c9e26ad0c3 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-07 03:53:38.567717 | orchestrator | 489ed1e44ca0 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-07 03:53:38.567730 | orchestrator | c17a398a1344 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-07 03:53:38.567741 | orchestrator | 0e00e1d2444a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-04-07 03:53:38.567753 | orchestrator | 2d35172a42cf registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-04-07 03:53:38.567765 | orchestrator | 5fdba8a66803 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-04-07 03:53:38.567777 | orchestrator | 821c32323ece registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-07 03:53:38.567794 | orchestrator | 081920ea5d7a registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-04-07 03:53:38.567807 | orchestrator | 228e01f8409b registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-04-07 03:53:38.567818 | orchestrator | bd0987ef946e registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-04-07 03:53:38.567830 | orchestrator | 89c8d4a89944 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-04-07 03:53:38.567842 | orchestrator | a45c908e8e51 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-04-07 03:53:38.567862 | orchestrator | bfe428beaf69 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-07 03:53:38.567880 | orchestrator | 01fc5d83ea22 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-04-07 03:53:38.567898 | orchestrator | f10af11c75e7 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-04-07 03:53:38.567930 | orchestrator | 5dbf39ed627f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-07 03:53:38.567965 | orchestrator | a9c7dce9fbec registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-04-07 03:53:38.567986 | orchestrator | aa0a8e824f02 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-07 03:53:38.568005 | orchestrator | d5738a294292 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-07 03:53:38.568024 | orchestrator | 1b136ca8599c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-04-07 03:53:38.568042 | orchestrator | 070459d75ece registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-04-07 03:53:38.568073 | orchestrator | 4f31b76e1178 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-04-07 03:53:38.568092 | orchestrator | 0b39209d4c60 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-07 03:53:38.969646 | orchestrator |
2026-04-07 03:53:38.969735 | orchestrator | ## Images @ testbed-manager
2026-04-07 03:53:38.969746 | orchestrator |
2026-04-07 03:53:38.969753 | orchestrator | + echo
2026-04-07 03:53:38.969761 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-07 03:53:38.969768 | orchestrator | + echo
2026-04-07 03:53:38.969779 | orchestrator | + osism container testbed-manager images
2026-04-07 03:53:41.680124 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-07 03:53:41.680244 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9b17f7d3fea3 24 hours ago 239MB
2026-04-07 03:53:41.680261 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-07 03:53:41.680281 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-07 03:53:41.680308 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-07 03:53:41.680334 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-07 03:53:41.680353 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-07 03:53:41.680370 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-07 03:53:41.680388 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-07 03:53:41.680475 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-07 03:53:41.680521 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-07 03:53:41.680539 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-07 03:53:41.680556 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-07 03:53:41.680591 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-07 03:53:41.680624 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-07 03:53:41.680643 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-07 03:53:41.680663 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-07 03:53:41.680682 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-07 03:53:41.680700 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-07 03:53:41.680717 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-07 03:53:41.680728 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-07 03:53:41.680739 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-07 03:53:41.680750 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-07 03:53:41.680761 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-07 03:53:41.680772 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-07 03:53:41.680785 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-07 03:53:42.081600 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-07 03:53:42.081690 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-07 03:53:42.122712 | orchestrator |
2026-04-07 03:53:42.122843 | orchestrator | ## Containers @ testbed-node-0
2026-04-07 03:53:42.122868 | orchestrator |
2026-04-07 03:53:42.122884 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-07 03:53:42.122899 | orchestrator | + echo
2026-04-07 03:53:42.122914 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-07 03:53:42.122931 | orchestrator | + echo
2026-04-07 03:53:42.122947 | orchestrator | + osism container testbed-node-0 ps
2026-04-07 03:53:44.721703 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-07 03:53:44.721821 | orchestrator | 0b30c54a375b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-07 03:53:44.721844 | orchestrator | e9cfacab5a86 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-07 03:53:44.721858 | orchestrator | e182302057a8 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-07 03:53:44.721869 | orchestrator | da96e54fd9a9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-07 03:53:44.721901 | orchestrator | c6f9e5d94c85 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-04-07 03:53:44.721912 | orchestrator | fb42c7fe4931 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-07 03:53:44.721945 | orchestrator | 84fd7dd86ea7 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-07 03:53:44.721956 | orchestrator | b29cbea7cfa4 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-07 03:53:44.721976 | orchestrator | dd35054a4061 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-07 03:53:44.721987 | orchestrator | 043abdb4bcf3 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-07 03:53:44.721997 | orchestrator | 9e2d568cd8ff registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-07 03:53:44.722007 | orchestrator | c10d28f6b0bc registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-07 03:53:44.722145 | orchestrator | b599acd12651 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-07 03:53:44.722173 | orchestrator | f3ca05a8cb94 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-07 03:53:44.722188 | orchestrator | f7bd919bcfa5 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-07 03:53:44.722204 | orchestrator | ffd7011bc6a9 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-07 03:53:44.722230 | orchestrator | 1bb6718ae7dd registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-07 03:53:44.722248 | orchestrator | 253b10759b9d registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-07 03:53:44.722277 | orchestrator | a9839e7b2cf8 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-07 03:53:44.722322 | orchestrator | 7dcc47212e40 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-07 03:53:44.722335 | orchestrator | de4e2e1aeb13 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-07 03:53:44.722345 | orchestrator | c861a14bb21f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-07 03:53:44.722366 | orchestrator | 3353962e68f4 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api
2026-04-07 03:53:44.722376 | orchestrator | b3b64feff3f3 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-07 03:53:44.722386 | orchestrator | 81748e966762 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-07 03:53:44.722473 | orchestrator | 8c68bbd8e3a8 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-07 03:53:44.722488 | orchestrator | bd2bdc35136b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central
2026-04-07 03:53:44.722498 | orchestrator | f4721290e537 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-07 03:53:44.722508 | orchestrator | eedc3b1e0598 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-07 03:53:44.722518 | orchestrator | 13ada10071f4 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-07 03:53:44.722528 | orchestrator | ba984fb37415 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-07 03:53:44.722538 | orchestrator | 3b554e74e3fd registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-07 03:53:44.722548 | orchestrator | 037dfa916e82 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-04-07 03:53:44.722558 | orchestrator | 5ad601b6d1e7 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-07 03:53:44.722568 | orchestrator | e23959bb0144 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-07 03:53:44.723178 | orchestrator | 512fca91e35e registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-07 03:53:44.723246 | orchestrator | de8483197b34 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-07 03:53:44.723263 | orchestrator | b92e169f2bbc registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-07 03:53:44.723288 | orchestrator | 3aef8a67ec4f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-07 03:53:44.723313 | orchestrator | d96c9c60b685 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) horizon
2026-04-07 03:53:44.723323 | orchestrator | 9017f8b402cd registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy
2026-04-07 03:53:44.723334 | orchestrator | 8c45e2e8f6b6 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-07 03:53:44.723344 | orchestrator | de966cf522cb registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api
2026-04-07 03:53:44.723354 | orchestrator | 54bb94442acb registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler
2026-04-07 03:53:44.723363 | orchestrator | 515590a8a729 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-07 03:53:44.723373 | orchestrator | d63c75f06447 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api
2026-04-07 03:53:44.723383 | orchestrator | 9dd43d3cec96 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone
2026-04-07 03:53:44.723392 | orchestrator | 220369d7e0af registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet
2026-04-07 03:53:44.723462 | orchestrator | bac7d3279dd8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_ssh
2026-04-07 03:53:44.723485 | orchestrator | 6fcb04cc2e9e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-0
2026-04-07 03:53:44.723500 | orchestrator | 4bb8ef2be51d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-07 03:53:44.723524 | orchestrator | 4cd0634997ff registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-07 03:53:44.723540 | orchestrator | 83f461eee4ea registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-07 03:53:44.723557 | orchestrator | 7ff3035c3bf8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-07 03:53:44.723573 | orchestrator | 882978998c8c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-07 03:53:44.723590 | orchestrator | d4ab00f9ffa6 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-07 03:53:44.723627 | orchestrator | fa95c5c920f7 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-07 03:53:44.723648 | orchestrator | e06d7df049d9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-07 03:53:44.723658 | orchestrator | 247f54ebab6f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-07 03:53:44.723672 | orchestrator | 193e456a9cd9 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-07 03:53:44.723688 | orchestrator | 1a7207520dba registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-07 03:53:44.723705 | orchestrator | af3961e291b0 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-07 03:53:44.723720 | orchestrator | 87e9632649a4 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-07 03:53:44.723735 | orchestrator | 2b92dd9a55ea registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-07 03:53:44.723749 | orchestrator | 7033d2cc6f7f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-07 03:53:44.723763 | orchestrator | c69e33a2997e registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-07 03:53:44.723780 | orchestrator | 0a1c7a8bdcee registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-07 03:53:44.723797 | orchestrator | 0826571b8ebe registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-07 03:53:44.723814 | orchestrator | 6976c01ab92e registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-07 03:53:44.723830 | orchestrator | 582ab0847267 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-07 03:53:44.723847 | orchestrator | df602b8eac1e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-07 03:53:45.126785 | orchestrator |
2026-04-07 03:53:45.126879 | orchestrator | ## Images @ testbed-node-0
2026-04-07 03:53:45.126889 | orchestrator |
2026-04-07 03:53:45.126896 | orchestrator | + echo
2026-04-07 03:53:45.126902 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-07 03:53:45.126908 | orchestrator | + echo
2026-04-07 03:53:45.126914 | orchestrator | + osism container testbed-node-0 images
2026-04-07 03:53:47.756309 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-07 03:53:47.756481 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-07 03:53:47.756508 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-07 03:53:47.756528 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-07 03:53:47.756575 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-07 03:53:47.756590 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-07 03:53:47.756601 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-07 03:53:47.756612 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-07 03:53:47.756623 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-07 03:53:47.756637 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-07 03:53:47.756656 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months
ago 274MB 2026-04-07 03:53:47.756673 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-07 03:53:47.756690 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-07 03:53:47.756707 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-07 03:53:47.756724 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-07 03:53:47.756741 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-07 03:53:47.756757 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-07 03:53:47.756774 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-07 03:53:47.756807 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-07 03:53:47.756825 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-07 03:53:47.756841 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-07 03:53:47.756857 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-07 03:53:47.756873 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-07 03:53:47.756891 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-07 03:53:47.756909 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 
30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-07 03:53:47.756934 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-07 03:53:47.756953 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-07 03:53:47.756970 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-07 03:53:47.756989 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-07 03:53:47.757007 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-07 03:53:47.757042 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-07 03:53:47.757060 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-07 03:53:47.757105 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-07 03:53:47.757126 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-07 03:53:47.757143 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-07 03:53:47.757160 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-07 03:53:47.757177 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-07 03:53:47.757194 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-07 03:53:47.757210 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 
abbd6e9f87e2 4 months ago 974MB 2026-04-07 03:53:47.757225 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-07 03:53:47.757242 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-07 03:53:47.757261 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-07 03:53:47.757279 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-07 03:53:47.757296 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-07 03:53:47.757314 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-07 03:53:47.757332 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-07 03:53:47.757350 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-07 03:53:47.757378 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-07 03:53:47.757469 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-07 03:53:47.757492 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-07 03:53:47.757511 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-07 03:53:47.757527 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-07 03:53:47.757543 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 
b1fcfbc49057 4 months ago 1.1GB 2026-04-07 03:53:47.757559 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-07 03:53:47.757575 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-07 03:53:47.757591 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-07 03:53:47.757622 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-07 03:53:47.757639 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-07 03:53:47.757657 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-07 03:53:47.757675 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-07 03:53:47.757692 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-07 03:53:47.757710 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-07 03:53:47.757728 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-07 03:53:47.757745 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-07 03:53:47.757781 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-07 03:53:47.757800 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-07 03:53:47.757818 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 
fcd09e53d925 4 months ago 840MB 2026-04-07 03:53:47.757834 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-07 03:53:47.757851 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-07 03:53:47.757868 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-07 03:53:48.187429 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-07 03:53:48.188291 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-07 03:53:48.247531 | orchestrator | 2026-04-07 03:53:48.247599 | orchestrator | ## Containers @ testbed-node-1 2026-04-07 03:53:48.247610 | orchestrator | 2026-04-07 03:53:48.247615 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-07 03:53:48.247620 | orchestrator | + echo 2026-04-07 03:53:48.247625 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-07 03:53:48.247630 | orchestrator | + echo 2026-04-07 03:53:48.247635 | orchestrator | + osism container testbed-node-1 ps 2026-04-07 03:53:50.960965 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-07 03:53:50.961073 | orchestrator | f4a0a446f6d5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-07 03:53:50.961091 | orchestrator | 1082d315bf1b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-07 03:53:50.961105 | orchestrator | 6aa7e7eebe7f registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-04-07 03:53:50.961163 | orchestrator | 64b7e347f143 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 
2026-04-07 03:53:50.961201 | orchestrator | 081345a00cf9 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-07 03:53:50.961238 | orchestrator | e539e2b9e069 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-07 03:53:50.961280 | orchestrator | 01a0dcc45748 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-07 03:53:50.961299 | orchestrator | cc9c6ba2eda3 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-07 03:53:50.961312 | orchestrator | 5be370d320dc registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-04-07 03:53:50.961325 | orchestrator | d59d09b5ed63 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-07 03:53:50.961338 | orchestrator | aa3d7c57fa4a registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-07 03:53:50.961351 | orchestrator | d54b70c6bb7a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-07 03:53:50.961669 | orchestrator | 9a60423e2cc6 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-07 03:53:50.961687 | orchestrator | 4eb9b72e4619 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes 
(healthy) aodh_listener 2026-04-07 03:53:50.961699 | orchestrator | 9d5c88164a39 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-04-07 03:53:50.961711 | orchestrator | 2f500b204a6b registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-07 03:53:50.961723 | orchestrator | b157b9e28b36 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-04-07 03:53:50.961735 | orchestrator | d4149ea18350 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-04-07 03:53:50.961747 | orchestrator | 8656655771bf registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-07 03:53:50.961759 | orchestrator | 9a09a4ec617a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-07 03:53:50.961771 | orchestrator | 6d4d76db44ef registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-07 03:53:50.961784 | orchestrator | f794c9022ab5 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-04-07 03:53:50.961796 | orchestrator | cc4a49e4139f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) octavia_api 2026-04-07 03:53:50.961821 | orchestrator | d8a0fab97ed8 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 
minutes ago Up 27 minutes (healthy) designate_worker 2026-04-07 03:53:50.961833 | orchestrator | b54f130cecb3 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-07 03:53:50.961845 | orchestrator | f5f2f1fb130c registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) designate_producer 2026-04-07 03:53:50.961858 | orchestrator | b70237e2002d registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-07 03:53:50.961928 | orchestrator | aa2070489325 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-07 03:53:50.961941 | orchestrator | c9e3fa02ca5f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-07 03:53:50.961954 | orchestrator | 014ffed54e47 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-07 03:53:50.961966 | orchestrator | cdff97794d43 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-04-07 03:53:50.961985 | orchestrator | c58e9702ac51 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-04-07 03:53:50.962071 | orchestrator | 35ba33501956 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-07 03:53:50.962090 | orchestrator | e0cc04f92b12 
registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-04-07 03:53:50.962103 | orchestrator | a28f42548661 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-04-07 03:53:50.962116 | orchestrator | 7322254b4b85 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-07 03:53:50.962128 | orchestrator | 75bff4c14bf7 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-04-07 03:53:50.962140 | orchestrator | 5d694c05f1b9 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-07 03:53:50.962153 | orchestrator | d15d297b5153 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-04-07 03:53:50.962165 | orchestrator | f2afbde1aa52 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-04-07 03:53:50.962178 | orchestrator | 91e18fa64c52 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-07 03:53:50.962201 | orchestrator | 7ee128790aea registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-07 03:53:50.962213 | orchestrator | 92fbc6ec9300 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api 2026-04-07 03:53:50.962225 | orchestrator | 1bb49edab813 
registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-07 03:53:50.962237 | orchestrator | 7deeb07901c9 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server 2026-04-07 03:53:50.962249 | orchestrator | 39cfd08da2dc registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-07 03:53:50.962262 | orchestrator | 97f649c8ff2e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone 2026-04-07 03:53:50.962274 | orchestrator | 9a15f380beee registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet 2026-04-07 03:53:50.962287 | orchestrator | 722a33c5739f registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh 2026-04-07 03:53:50.962301 | orchestrator | e7d3f31893c1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-1 2026-04-07 03:53:50.962314 | orchestrator | a83bf82c9863 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-07 03:53:50.962328 | orchestrator | e8d9f46c7c23 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-04-07 03:53:50.962341 | orchestrator | 3a9304f310c7 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-07 03:53:50.962364 | orchestrator | 25cb08f49410 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init 
--single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-07 03:53:50.962384 | orchestrator | b4a5a70be1d0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-07 03:53:50.962492 | orchestrator | ba0a259efb28 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-07 03:53:50.962512 | orchestrator | 04d4bab6d098 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-07 03:53:50.962524 | orchestrator | 2b9e713b83c6 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-07 03:53:50.962544 | orchestrator | 2114d71c5ce6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-07 03:53:50.962556 | orchestrator | 12799607dba8 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-07 03:53:50.962567 | orchestrator | aa6a84e82aa0 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-07 03:53:50.962580 | orchestrator | 33181b4aa13d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-07 03:53:50.962592 | orchestrator | 366abd3d1237 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-07 03:53:50.962604 | orchestrator | 96426da70682 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) 
opensearch_dashboards 2026-04-07 03:53:50.962615 | orchestrator | f4c96749f8e7 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-07 03:53:50.962627 | orchestrator | b1a740be1cad registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-07 03:53:50.962639 | orchestrator | 7533b2837a6e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-07 03:53:50.962651 | orchestrator | 7b940a1265ad registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-07 03:53:50.962664 | orchestrator | 429b3e9addbc registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-07 03:53:50.962676 | orchestrator | e1728d45e3d1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-07 03:53:50.962689 | orchestrator | 0b6dfdec2edc registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-07 03:53:51.378971 | orchestrator | 2026-04-07 03:53:51.379069 | orchestrator | ## Images @ testbed-node-1 2026-04-07 03:53:51.379080 | orchestrator | 2026-04-07 03:53:51.379117 | orchestrator | + echo 2026-04-07 03:53:51.379125 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-07 03:53:51.379132 | orchestrator | + echo 2026-04-07 03:53:51.379139 | orchestrator | + osism container testbed-node-1 images 2026-04-07 03:53:54.042481 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-07 03:53:54.042572 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-07 03:53:54.042583 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 
8a9865997707 4 months ago 266MB 2026-04-07 03:53:54.042592 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-07 03:53:54.042600 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-07 03:53:54.042608 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-07 03:53:54.042636 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-07 03:53:54.042644 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-07 03:53:54.042651 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-07 03:53:54.042661 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-07 03:53:54.042673 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-07 03:53:54.042686 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-07 03:53:54.042697 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-07 03:53:54.042709 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-07 03:53:54.042720 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-07 03:53:54.042731 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-07 03:53:54.042743 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-07 03:53:54.042756 | 
orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-07 03:53:54.042769 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-07 03:53:54.042782 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-07 03:53:54.042806 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-07 03:53:54.042815 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-07 03:53:54.042822 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-07 03:53:54.042830 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-07 03:53:54.042837 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-07 03:53:54.042845 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-07 03:53:54.042852 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-07 03:53:54.042864 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-07 03:53:54.042872 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-07 03:53:54.042879 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-07 03:53:54.042886 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months 
ago 1.13GB 2026-04-07 03:53:54.042893 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-07 03:53:54.042961 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-07 03:53:54.042969 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-07 03:53:54.042976 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-07 03:53:54.042984 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-07 03:53:54.042991 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-07 03:53:54.042999 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-07 03:53:54.043006 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-07 03:53:54.043013 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-07 03:53:54.043020 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-07 03:53:54.043028 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-07 03:53:54.043035 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-07 03:53:54.043042 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-07 03:53:54.043049 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-07 
03:53:54.043057 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-07 03:53:54.043064 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-07 03:53:54.043071 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-07 03:53:54.043079 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-07 03:53:54.043086 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-07 03:53:54.043093 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-07 03:53:54.043100 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-07 03:53:54.043108 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-07 03:53:54.043115 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-07 03:53:54.043122 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-07 03:53:54.043129 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-07 03:53:54.043137 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-07 03:53:54.043144 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-07 03:53:54.043156 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 
2026-04-07 03:53:54.043163 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-07 03:53:54.043171 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-07 03:53:54.043181 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-07 03:53:54.043192 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-07 03:53:54.043203 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-07 03:53:54.043219 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-07 03:53:54.043230 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-07 03:53:54.043240 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-07 03:53:54.043251 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-07 03:53:54.043262 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-07 03:53:54.043273 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-07 03:53:54.471624 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-07 03:53:54.472321 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-07 03:53:54.538686 | orchestrator | 2026-04-07 03:53:54.538759 | orchestrator | ## Containers @ testbed-node-2 2026-04-07 03:53:54.538768 | orchestrator | 2026-04-07 03:53:54.538775 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-07 03:53:54.538781 | orchestrator | + echo 
2026-04-07 03:53:54.538787 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-07 03:53:54.538794 | orchestrator | + echo 2026-04-07 03:53:54.538800 | orchestrator | + osism container testbed-node-2 ps 2026-04-07 03:53:57.270120 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-07 03:53:57.270277 | orchestrator | eab0daf3c238 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-07 03:53:57.270296 | orchestrator | 48c228fa170a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-07 03:53:57.270308 | orchestrator | d281c843e324 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-04-07 03:53:57.270320 | orchestrator | 54ec4bda335f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-04-07 03:53:57.270334 | orchestrator | 2a351782cbb4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-07 03:53:57.270345 | orchestrator | 7c4485d948e6 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-07 03:53:57.270364 | orchestrator | ea0b1d9e53b6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-07 03:53:57.270482 | orchestrator | db346238276c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-07 03:53:57.270507 | orchestrator | 2cbcda3f7eb5 
registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_share 2026-04-07 03:53:57.270527 | orchestrator | 99f846d233d6 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-07 03:53:57.270546 | orchestrator | 7df65d3c40c1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-07 03:53:57.270575 | orchestrator | 325abb0a1a8c registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-07 03:53:57.270596 | orchestrator | cb503b97cff6 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-07 03:53:57.270612 | orchestrator | 018d04bc46c3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-04-07 03:53:57.270625 | orchestrator | a511bd4464d2 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-04-07 03:53:57.270639 | orchestrator | 17a269f63aa2 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-07 03:53:57.270651 | orchestrator | 4e889c69bc69 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-04-07 03:53:57.270664 | orchestrator | 06d4fae08a66 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-04-07 03:53:57.270678 | orchestrator | 0b359aaa16b2 
registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-07 03:53:57.270714 | orchestrator | 46412ed2c863 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-07 03:53:57.270741 | orchestrator | f7ce1a357931 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-07 03:53:57.270761 | orchestrator | a9903aabd1cb registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-04-07 03:53:57.270779 | orchestrator | 6c35357c10a2 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-07 03:53:57.270797 | orchestrator | 0e7e9c10f839 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-04-07 03:53:57.270830 | orchestrator | 636478c9040f registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-07 03:53:57.270849 | orchestrator | 1fafffbf1e30 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_producer 2026-04-07 03:53:57.270868 | orchestrator | d3c7298bb735 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-07 03:53:57.270888 | orchestrator | 42f3528f86db registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-07 03:53:57.270908 
| orchestrator | d10d1a9d9cec registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-07 03:53:57.270928 | orchestrator | 522021bdfae8 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-07 03:53:57.270948 | orchestrator | a4e14d15496c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-04-07 03:53:57.270967 | orchestrator | ce5f60fc9f43 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-04-07 03:53:57.270987 | orchestrator | 635f99165e3a registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-07 03:53:57.271006 | orchestrator | e0973e7ee586 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-04-07 03:53:57.271024 | orchestrator | 05b5c51e8484 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-07 03:53:57.271041 | orchestrator | 7027f19ba048 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-07 03:53:57.271059 | orchestrator | 361d1a8e1eb4 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-04-07 03:53:57.271077 | orchestrator | 2cf8f55e546f registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-07 
03:53:57.271105 | orchestrator | 7b9928586d06 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-04-07 03:53:57.271143 | orchestrator | 8ac7713528ff registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 40 minutes ago Up 39 minutes (healthy) horizon 2026-04-07 03:53:57.271162 | orchestrator | e0da0e2c7145 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-07 03:53:57.271174 | orchestrator | a6bf6df49a37 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-07 03:53:57.271193 | orchestrator | c9893dd8c82c registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api 2026-04-07 03:53:57.271205 | orchestrator | 050ef441e52a registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-07 03:53:57.271216 | orchestrator | df47adbd983a registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server 2026-04-07 03:53:57.271227 | orchestrator | 657af79ee5b9 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-07 03:53:57.271238 | orchestrator | ad274b128919 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone 2026-04-07 03:53:57.271249 | orchestrator | 7b699f0de91c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet 2026-04-07 03:53:57.271260 | orchestrator | 
4622d17ecb65 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh 2026-04-07 03:53:57.271271 | orchestrator | 29bc6a61c29c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-2 2026-04-07 03:53:57.271282 | orchestrator | 8620913e6e8d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-07 03:53:57.271293 | orchestrator | f4f6ca89ad43 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-04-07 03:53:57.271309 | orchestrator | 36390f1a0bc8 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-07 03:53:57.271321 | orchestrator | 185ab092d4a2 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-07 03:53:57.271331 | orchestrator | 46f1c83effd0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-07 03:53:57.271342 | orchestrator | d451e2ddf7d9 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-07 03:53:57.271353 | orchestrator | 293d65096d77 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-07 03:53:57.271364 | orchestrator | 81dc552e0b6b registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-07 03:53:57.271375 | orchestrator | fbae7489608e 
registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-07 03:53:57.271457 | orchestrator | aacdd113da08 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-07 03:53:57.271479 | orchestrator | 9a960951ebcb registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-07 03:53:57.271490 | orchestrator | edfa81b88f0d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-07 03:53:57.271512 | orchestrator | 96730c145616 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-07 03:53:57.271524 | orchestrator | e1ded2fd9db0 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-07 03:53:57.271538 | orchestrator | 8d174650e70f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-07 03:53:57.271556 | orchestrator | b484956e9d15 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-07 03:53:57.271584 | orchestrator | 4856b3d6e42f registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-07 03:53:57.271602 | orchestrator | 6df162b53b87 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-07 03:53:57.271621 | orchestrator | b64e2c9ba737 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-07 03:53:57.271638 
| orchestrator | f40c526b6757 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-07 03:53:57.271656 | orchestrator | 2c055504d88d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-07 03:53:57.651844 | orchestrator | 2026-04-07 03:53:57.651944 | orchestrator | ## Images @ testbed-node-2 2026-04-07 03:53:57.651960 | orchestrator | 2026-04-07 03:53:57.651972 | orchestrator | + echo 2026-04-07 03:53:57.651984 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-07 03:53:57.652003 | orchestrator | + echo 2026-04-07 03:53:57.652021 | orchestrator | + osism container testbed-node-2 images 2026-04-07 03:54:00.256290 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-07 03:54:00.256503 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-07 03:54:00.256527 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-07 03:54:00.256542 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-07 03:54:00.256556 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-07 03:54:00.256569 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-07 03:54:00.256582 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-07 03:54:00.256594 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-07 03:54:00.256632 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-07 03:54:00.256645 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-07 03:54:00.256659 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-07 03:54:00.256677 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-07 03:54:00.256691 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-07 03:54:00.256748 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-07 03:54:00.256764 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-07 03:54:00.256795 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-07 03:54:00.256808 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-07 03:54:00.256821 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-07 03:54:00.256836 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-07 03:54:00.256848 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-07 03:54:00.256860 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-07 03:54:00.256872 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-07 03:54:00.256883 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-07 03:54:00.256897 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-07 03:54:00.256909 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-07 03:54:00.256923 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-07 03:54:00.256977 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-07 03:54:00.256991 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-07 03:54:00.257003 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-07 03:54:00.257016 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-07 03:54:00.257029 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-07 03:54:00.257042 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-07 03:54:00.257077 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-07 03:54:00.257092 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-07 03:54:00.257157 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-07 03:54:00.257173 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-07 03:54:00.257186 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-07 03:54:00.257200 | orchestrator 
| registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-07 03:54:00.257213 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-07 03:54:00.257225 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-07 03:54:00.257239 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-07 03:54:00.257253 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-07 03:54:00.257266 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-07 03:54:00.257279 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-07 03:54:00.257292 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-07 03:54:00.257305 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-07 03:54:00.257318 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-07 03:54:00.257332 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-07 03:54:00.257381 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-07 03:54:00.257422 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-07 03:54:00.257433 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-07 03:54:00.257447 | orchestrator | 
registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-07 03:54:00.257459 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-07 03:54:00.257471 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-07 03:54:00.257484 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-07 03:54:00.257497 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-07 03:54:00.257512 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-07 03:54:00.257524 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-07 03:54:00.257536 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-07 03:54:00.257549 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-07 03:54:00.257610 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-07 03:54:00.257627 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-07 03:54:00.257641 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-07 03:54:00.257655 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-07 03:54:00.257678 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-07 03:54:00.257693 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-07 03:54:00.257706 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-07 03:54:00.257719 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-07 03:54:00.257732 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-07 03:54:00.257746 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-07 03:54:00.687034 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-07 03:54:00.696122 | orchestrator | + set -e 2026-04-07 03:54:00.696237 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 03:54:00.696256 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 03:54:00.696269 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 03:54:00.696278 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 03:54:00.696287 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 03:54:00.696297 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 03:54:00.696374 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 03:54:00.696463 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 03:54:00.697339 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 03:54:00.697446 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 03:54:00.697457 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 03:54:00.697465 | orchestrator | ++ export ARA=false 2026-04-07 03:54:00.697472 | orchestrator | ++ ARA=false 2026-04-07 03:54:00.697478 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 03:54:00.697485 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 03:54:00.697491 | orchestrator | ++ export TEMPEST=false 2026-04-07 03:54:00.697498 | orchestrator | ++ TEMPEST=false 2026-04-07 
03:54:00.697504 | orchestrator | ++ export IS_ZUUL=true
2026-04-07 03:54:00.697511 | orchestrator | ++ IS_ZUUL=true
2026-04-07 03:54:00.697517 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 03:54:00.697524 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 03:54:00.697530 | orchestrator | ++ export EXTERNAL_API=false
2026-04-07 03:54:00.697537 | orchestrator | ++ EXTERNAL_API=false
2026-04-07 03:54:00.697543 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-07 03:54:00.697549 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-07 03:54:00.697556 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-07 03:54:00.697563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-07 03:54:00.697569 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-07 03:54:00.697575 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-07 03:54:00.697582 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-07 03:54:00.697588 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-07 03:54:00.710364 | orchestrator | + set -e
2026-04-07 03:54:00.710453 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 03:54:00.710462 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 03:54:00.710470 | orchestrator | ++ INTERACTIVE=false
2026-04-07 03:54:00.710477 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 03:54:00.710483 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 03:54:00.710490 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-07 03:54:00.711906 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-07 03:54:00.717244 | orchestrator |
2026-04-07 03:54:00.717302 | orchestrator | # Ceph status
2026-04-07 03:54:00.717310 | orchestrator |
2026-04-07 03:54:00.717317 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-07 03:54:00.717325 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-07 03:54:00.717332 | orchestrator | + echo
2026-04-07 03:54:00.717338 | orchestrator | + echo '# Ceph status'
2026-04-07 03:54:00.717345 | orchestrator | + echo
2026-04-07 03:54:00.717356 | orchestrator | + ceph -s
2026-04-07 03:54:01.402446 | orchestrator | cluster:
2026-04-07 03:54:01.402567 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-07 03:54:01.402596 | orchestrator | health: HEALTH_OK
2026-04-07 03:54:01.402616 | orchestrator |
2026-04-07 03:54:01.402636 | orchestrator | services:
2026-04-07 03:54:01.402656 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 72m)
2026-04-07 03:54:01.402675 | orchestrator | mgr: testbed-node-2(active, since 59m), standbys: testbed-node-1, testbed-node-0
2026-04-07 03:54:01.402688 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-07 03:54:01.402699 | orchestrator | osd: 6 osds: 6 up (since 68m), 6 in (since 69m)
2026-04-07 03:54:01.402711 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-07 03:54:01.402723 | orchestrator |
2026-04-07 03:54:01.402734 | orchestrator | data:
2026-04-07 03:54:01.402745 | orchestrator | volumes: 1/1 healthy
2026-04-07 03:54:01.402756 | orchestrator | pools: 14 pools, 401 pgs
2026-04-07 03:54:01.402768 | orchestrator | objects: 552 objects, 2.2 GiB
2026-04-07 03:54:01.402779 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-04-07 03:54:01.402790 | orchestrator | pgs: 401 active+clean
2026-04-07 03:54:01.402801 | orchestrator |
2026-04-07 03:54:01.465305 | orchestrator |
2026-04-07 03:54:01.465506 | orchestrator | # Ceph versions
2026-04-07 03:54:01.465537 | orchestrator |
2026-04-07 03:54:01.465587 | orchestrator | + echo
2026-04-07 03:54:01.465607 | orchestrator | + echo '# Ceph versions'
2026-04-07 03:54:01.465627 | orchestrator | + echo
2026-04-07 03:54:01.465647 | orchestrator | + ceph versions
2026-04-07 03:54:02.099595 | orchestrator | {
2026-04-07 03:54:02.099702 | orchestrator | "mon": {
2026-04-07 03:54:02.099725 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-07 03:54:02.099743 | orchestrator | },
2026-04-07 03:54:02.099759 | orchestrator | "mgr": {
2026-04-07 03:54:02.099776 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-07 03:54:02.099793 | orchestrator | },
2026-04-07 03:54:02.099811 | orchestrator | "osd": {
2026-04-07 03:54:02.099828 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-07 03:54:02.099844 | orchestrator | },
2026-04-07 03:54:02.099879 | orchestrator | "mds": {
2026-04-07 03:54:02.099909 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-07 03:54:02.099927 | orchestrator | },
2026-04-07 03:54:02.099944 | orchestrator | "rgw": {
2026-04-07 03:54:02.099961 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-07 03:54:02.099978 | orchestrator | },
2026-04-07 03:54:02.099995 | orchestrator | "overall": {
2026-04-07 03:54:02.100039 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-07 03:54:02.100058 | orchestrator | }
2026-04-07 03:54:02.100076 | orchestrator | }
2026-04-07 03:54:02.157810 | orchestrator |
2026-04-07 03:54:02.157887 | orchestrator | # Ceph OSD tree
2026-04-07 03:54:02.157896 | orchestrator |
2026-04-07 03:54:02.157903 | orchestrator | + echo
2026-04-07 03:54:02.157910 | orchestrator | + echo '# Ceph OSD tree'
2026-04-07 03:54:02.157917 | orchestrator | + echo
2026-04-07 03:54:02.157924 | orchestrator | + ceph osd df tree
2026-04-07 03:54:02.700793 | orchestrator | ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR   PGS  STATUS  TYPE NAME
2026-04-07 03:54:02.700871 | orchestrator | -1       0.11691 -        120 GiB 7.0 GiB 6.7 GiB 6 KiB   377 MiB 113 GiB 5.87  1.00  -    root default
2026-04-07 03:54:02.700878 | orchestrator | -3       0.03897 -        40 GiB  2.3 GiB 2.2 GiB 2 KiB   123 MiB 38 GiB  5.87  1.00  -    host testbed-node-3
2026-04-07 03:54:02.700884 | orchestrator | 0    hdd 0.01949 1.00000  20 GiB  1.1 GiB 1.0 GiB 1 KiB   62 MiB  19 GiB  5.55  0.95  189  up  osd.0
2026-04-07 03:54:02.700890 | orchestrator | 3    hdd 0.01949 1.00000  20 GiB  1.2 GiB 1.2 GiB 1 KiB   62 MiB  19 GiB  6.18  1.05  201  up  osd.3
2026-04-07 03:54:02.700912 | orchestrator | -5       0.03897 -        40 GiB  2.3 GiB 2.2 GiB 2 KiB   127 MiB 38 GiB  5.88  1.00  -    host testbed-node-4
2026-04-07 03:54:02.700928 | orchestrator | 1    hdd 0.01949 1.00000  20 GiB  1.2 GiB 1.1 GiB 1 KiB   62 MiB  19 GiB  5.87  1.00  195  up  osd.1
2026-04-07 03:54:02.700933 | orchestrator | 5    hdd 0.01949 1.00000  20 GiB  1.2 GiB 1.1 GiB 1 KiB   66 MiB  19 GiB  5.89  1.00  197  up  osd.5
2026-04-07 03:54:02.700938 | orchestrator | -7       0.03897 -        40 GiB  2.3 GiB 2.2 GiB 2 KiB   127 MiB 38 GiB  5.88  1.00  -    host testbed-node-5
2026-04-07 03:54:02.700944 | orchestrator | 2    hdd 0.01949 1.00000  20 GiB  1.4 GiB 1.4 GiB 1 KiB   62 MiB  19 GiB  7.12  1.21  198  up  osd.2
2026-04-07 03:54:02.700949 | orchestrator | 4    hdd 0.01949 1.00000  20 GiB  948 MiB 883 MiB 1 KiB   66 MiB  19 GiB  4.64  0.79  190  up  osd.4
2026-04-07 03:54:02.700954 | orchestrator |                    TOTAL  120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 377 MiB 113 GiB 5.87
2026-04-07 03:54:02.700959 | orchestrator | MIN/MAX VAR: 0.79/1.21 STDDEV: 0.74
2026-04-07 03:54:02.767809 | orchestrator |
2026-04-07 03:54:02.767930 | orchestrator | # Ceph monitor status
2026-04-07 03:54:02.767947 | orchestrator |
2026-04-07 03:54:02.767958 | orchestrator | + echo
2026-04-07 03:54:02.767996 | orchestrator | + echo '# Ceph monitor status'
2026-04-07 03:54:02.768007 | orchestrator | + echo
2026-04-07 03:54:02.768018 | orchestrator | + ceph mon stat
2026-04-07 03:54:03.381789 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-07 03:54:03.439040 | orchestrator |
2026-04-07 03:54:03.439152 | orchestrator | # Ceph quorum status
2026-04-07 03:54:03.439169 | orchestrator |
2026-04-07 03:54:03.439182 | orchestrator | + echo
2026-04-07 03:54:03.439194 | orchestrator | + echo '# Ceph quorum status'
2026-04-07 03:54:03.439205 | orchestrator | + echo
2026-04-07 03:54:03.439216 | orchestrator | + ceph quorum_status
2026-04-07 03:54:03.439350 | orchestrator | + jq
2026-04-07 03:54:04.112981 | orchestrator | {
2026-04-07 03:54:04.113197 | orchestrator | "election_epoch": 8,
2026-04-07 03:54:04.113217 | orchestrator | "quorum": [
2026-04-07 03:54:04.113230 | orchestrator | 0,
2026-04-07 03:54:04.113241 | orchestrator | 1,
2026-04-07 03:54:04.113252 | orchestrator | 2
2026-04-07 03:54:04.113263 | orchestrator | ],
2026-04-07 03:54:04.113274 | orchestrator | "quorum_names": [
2026-04-07 03:54:04.113285 | orchestrator | "testbed-node-0",
2026-04-07 03:54:04.113296 | orchestrator | "testbed-node-1",
2026-04-07 03:54:04.113307 | orchestrator | "testbed-node-2"
2026-04-07 03:54:04.113318 | orchestrator | ],
2026-04-07 03:54:04.113330 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-04-07 03:54:04.113342 | orchestrator | "quorum_age": 4362,
2026-04-07 03:54:04.113365 | orchestrator | "features": {
2026-04-07 03:54:04.113377 | orchestrator | "quorum_con": "4540138322906710015",
2026-04-07 03:54:04.113412 | orchestrator | "quorum_mon": [
2026-04-07 03:54:04.113423 | orchestrator | "kraken",
2026-04-07 03:54:04.113434 | orchestrator | "luminous",
2026-04-07 03:54:04.113445 | orchestrator | "mimic",
2026-04-07 03:54:04.113456 | orchestrator | "osdmap-prune",
2026-04-07 03:54:04.113467 | orchestrator | "nautilus",
2026-04-07 03:54:04.113478 | orchestrator | "octopus",
2026-04-07 03:54:04.113489 | orchestrator | "pacific",
2026-04-07 03:54:04.113500 | orchestrator | "elector-pinging",
2026-04-07 03:54:04.113511 | orchestrator | "quincy",
2026-04-07 03:54:04.113522 | orchestrator | "reef"
2026-04-07 03:54:04.113533 | orchestrator | ]
2026-04-07 03:54:04.113545 | orchestrator | },
2026-04-07 03:54:04.113556 | orchestrator | "monmap": {
2026-04-07 03:54:04.113566 | orchestrator | "epoch": 1,
2026-04-07 03:54:04.113578 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-04-07 03:54:04.113590 | orchestrator | "modified": "2026-04-07T02:41:03.352327Z",
2026-04-07 03:54:04.113601 | orchestrator | "created": "2026-04-07T02:41:03.352327Z",
2026-04-07 03:54:04.113612 | orchestrator | "min_mon_release": 18,
2026-04-07 03:54:04.113623 | orchestrator | "min_mon_release_name": "reef",
2026-04-07 03:54:04.113634 | orchestrator | "election_strategy": 1,
2026-04-07 03:54:04.113645 | orchestrator | "disallowed_leaders: ": "",
2026-04-07 03:54:04.113656 | orchestrator | "stretch_mode": false,
2026-04-07 03:54:04.113693 | orchestrator | "tiebreaker_mon": "",
2026-04-07 03:54:04.113705 | orchestrator | "removed_ranks: ": "",
2026-04-07 03:54:04.113716 | orchestrator | "features": {
2026-04-07 03:54:04.113727 | orchestrator | "persistent": [
2026-04-07 03:54:04.113738 | orchestrator | "kraken",
2026-04-07 03:54:04.113749 | orchestrator | "luminous",
2026-04-07 03:54:04.113762 | orchestrator | "mimic",
2026-04-07 03:54:04.113775 | orchestrator | "osdmap-prune",
2026-04-07 03:54:04.113788 | orchestrator | "nautilus",
2026-04-07 03:54:04.113801 | orchestrator | "octopus",
2026-04-07 03:54:04.113815 | orchestrator | "pacific",
2026-04-07 03:54:04.113828 | orchestrator | "elector-pinging",
2026-04-07 03:54:04.113841 | orchestrator | "quincy",
2026-04-07 03:54:04.113853 | orchestrator | "reef"
2026-04-07 03:54:04.113866 | orchestrator | ],
2026-04-07 03:54:04.113880 | orchestrator | "optional": []
2026-04-07 03:54:04.113892 | orchestrator | },
2026-04-07 03:54:04.113905 | orchestrator | "mons": [
2026-04-07 03:54:04.113918 | orchestrator | {
2026-04-07 03:54:04.113930 | orchestrator | "rank": 0,
2026-04-07 03:54:04.113943 | orchestrator | "name": "testbed-node-0",
2026-04-07 03:54:04.113956 | orchestrator | "public_addrs": {
2026-04-07 03:54:04.113969 | orchestrator | "addrvec": [
2026-04-07 03:54:04.113982 | orchestrator | {
2026-04-07 03:54:04.113996 | orchestrator | "type": "v2",
2026-04-07 03:54:04.114010 | orchestrator | "addr": "192.168.16.8:3300",
2026-04-07 03:54:04.114083 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114097 | orchestrator | },
2026-04-07 03:54:04.114110 | orchestrator | {
2026-04-07 03:54:04.114124 | orchestrator | "type": "v1",
2026-04-07 03:54:04.114135 | orchestrator | "addr": "192.168.16.8:6789",
2026-04-07 03:54:04.114146 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114157 | orchestrator | }
2026-04-07 03:54:04.114168 | orchestrator | ]
2026-04-07 03:54:04.114179 | orchestrator | },
2026-04-07 03:54:04.114190 | orchestrator | "addr": "192.168.16.8:6789/0",
2026-04-07 03:54:04.114201 | orchestrator | "public_addr": "192.168.16.8:6789/0",
2026-04-07 03:54:04.114212 | orchestrator | "priority": 0,
2026-04-07 03:54:04.114223 | orchestrator | "weight": 0,
2026-04-07 03:54:04.114234 | orchestrator | "crush_location": "{}"
2026-04-07 03:54:04.114266 | orchestrator | },
2026-04-07 03:54:04.114277 | orchestrator | {
2026-04-07 03:54:04.114288 | orchestrator | "rank": 1,
2026-04-07 03:54:04.114307 | orchestrator | "name": "testbed-node-1",
2026-04-07 03:54:04.114326 | orchestrator | "public_addrs": {
2026-04-07 03:54:04.114344 | orchestrator | "addrvec": [
2026-04-07 03:54:04.114362 | orchestrator | {
2026-04-07 03:54:04.114406 | orchestrator | "type": "v2",
2026-04-07 03:54:04.114425 | orchestrator | "addr": "192.168.16.11:3300",
2026-04-07 03:54:04.114441 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114459 | orchestrator | },
2026-04-07 03:54:04.114474 | orchestrator | {
2026-04-07 03:54:04.114491 | orchestrator | "type": "v1",
2026-04-07 03:54:04.114507 | orchestrator | "addr": "192.168.16.11:6789",
2026-04-07 03:54:04.114523 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114539 | orchestrator | }
2026-04-07 03:54:04.114556 | orchestrator | ]
2026-04-07 03:54:04.114575 | orchestrator | },
2026-04-07 03:54:04.114593 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-04-07 03:54:04.114611 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-04-07 03:54:04.114630 | orchestrator | "priority": 0,
2026-04-07 03:54:04.114647 | orchestrator | "weight": 0,
2026-04-07 03:54:04.114666 | orchestrator | "crush_location": "{}"
2026-04-07 03:54:04.114685 | orchestrator | },
2026-04-07 03:54:04.114699 | orchestrator | {
2026-04-07 03:54:04.114710 | orchestrator | "rank": 2,
2026-04-07 03:54:04.114721 | orchestrator | "name": "testbed-node-2",
2026-04-07 03:54:04.114732 | orchestrator | "public_addrs": {
2026-04-07 03:54:04.114743 | orchestrator | "addrvec": [
2026-04-07 03:54:04.114754 | orchestrator | {
2026-04-07 03:54:04.114765 | orchestrator | "type": "v2",
2026-04-07 03:54:04.114776 | orchestrator | "addr": "192.168.16.12:3300",
2026-04-07 03:54:04.114794 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114810 | orchestrator | },
2026-04-07 03:54:04.114821 | orchestrator | {
2026-04-07 03:54:04.114833 | orchestrator | "type": "v1",
2026-04-07 03:54:04.114844 | orchestrator | "addr": "192.168.16.12:6789",
2026-04-07 03:54:04.114855 | orchestrator | "nonce": 0
2026-04-07 03:54:04.114880 | orchestrator | }
2026-04-07 03:54:04.114891 | orchestrator | ]
2026-04-07 03:54:04.114902 | orchestrator | },
2026-04-07 03:54:04.114913 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-04-07 03:54:04.114925 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-04-07 03:54:04.114936 | orchestrator | "priority": 0,
2026-04-07 03:54:04.114947 | orchestrator | "weight": 0,
2026-04-07 03:54:04.114973 | orchestrator | "crush_location": "{}"
2026-04-07 03:54:04.114984 | orchestrator | }
2026-04-07 03:54:04.114995 | orchestrator | ]
2026-04-07 03:54:04.115006 | orchestrator | }
2026-04-07 03:54:04.115017 | orchestrator | }
2026-04-07 03:54:04.115044 | orchestrator |
2026-04-07 03:54:04.115056 | orchestrator | # Ceph free space status
2026-04-07 03:54:04.115067 | orchestrator |
2026-04-07 03:54:04.115078 | orchestrator | + echo
2026-04-07 03:54:04.115089 | orchestrator | + echo '# Ceph free space status'
2026-04-07 03:54:04.115100 | orchestrator | + echo
2026-04-07 03:54:04.115111 | orchestrator | + ceph df
2026-04-07 03:54:04.749985 | orchestrator | --- RAW STORAGE ---
2026-04-07 03:54:04.750108 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-04-07 03:54:04.750128 | orchestrator | hdd    120 GiB  113 GiB  7.0 GiB  7.0 GiB   5.87
2026-04-07 03:54:04.750159 | orchestrator | TOTAL  120 GiB  113 GiB  7.0 GiB  7.0 GiB   5.87
2026-04-07 03:54:04.750166 | orchestrator |
2026-04-07 03:54:04.750173 | orchestrator | --- POOLS ---
2026-04-07 03:54:04.750181 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-04-07 03:54:04.750188 | orchestrator | .mgr                       1   1    577 KiB  2        1.1 MiB  0      53 GiB
2026-04-07 03:54:04.750195 | orchestrator | cephfs_data                2   32   0 B      0        0 B      0      35 GiB
2026-04-07 03:54:04.750201 | orchestrator | cephfs_metadata            3   16   4.4 KiB  22       96 KiB   0      35 GiB
2026-04-07 03:54:04.750208 | orchestrator | default.rgw.buckets.data   4   32   0 B      0        0 B      0      35 GiB
2026-04-07 03:54:04.750214 | orchestrator | default.rgw.buckets.index  5   32   0 B      0        0 B      0      35 GiB
2026-04-07 03:54:04.750222 | orchestrator | default.rgw.control        6   32   0 B      8        0 B      0      35 GiB
2026-04-07 03:54:04.750228 | orchestrator | default.rgw.log            7   32   3.6 KiB  209      408 KiB  0      35 GiB
2026-04-07 03:54:04.750235 | orchestrator | default.rgw.meta           8   32   0 B      0        0 B      0      35 GiB
2026-04-07 03:54:04.750241 | orchestrator | .rgw.root                  9   32   1.4 KiB  4        32 KiB   0      53 GiB
2026-04-07 03:54:04.750247 | orchestrator | backups                    10  32   19 B     2        12 KiB   0      35 GiB
2026-04-07 03:54:04.750253 | orchestrator | volumes                    11  32   19 B     2        12 KiB   0      35 GiB
2026-04-07 03:54:04.750260 | orchestrator | images                     12  32   2.2 GiB  299      6.7 GiB  5.94   35 GiB
2026-04-07 03:54:04.750266 | orchestrator | metrics                    13  32   19 B     2        12 KiB   0      35 GiB
2026-04-07 03:54:04.750272 | orchestrator | vms                        14  32   19 B     2        12 KiB   0      35 GiB
2026-04-07 03:54:04.820923 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-07 03:54:04.883943 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-07 03:54:04.884034 | orchestrator | + osism apply facts
2026-04-07 03:54:17.318277 | orchestrator | 2026-04-07 03:54:17 | INFO  | Task 292afa7e-4336-4ddc-a570-a1730b6c678c (facts) was prepared for execution.
2026-04-07 03:54:17.318417 | orchestrator | 2026-04-07 03:54:17 | INFO  | It takes a moment until task 292afa7e-4336-4ddc-a570-a1730b6c678c (facts) has been started and output is visible here.
2026-04-07 03:54:31.960156 | orchestrator |
2026-04-07 03:54:31.960277 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-07 03:54:31.960294 | orchestrator |
2026-04-07 03:54:31.960307 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-07 03:54:31.960318 | orchestrator | Tuesday 07 April 2026 03:54:22 +0000 (0:00:00.324) 0:00:00.324 *********
2026-04-07 03:54:31.960330 | orchestrator | ok: [testbed-manager]
2026-04-07 03:54:31.960342 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:54:31.960353 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:54:31.960457 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:54:31.960477 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:54:31.960494 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:54:31.960547 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:54:31.960567 | orchestrator |
2026-04-07 03:54:31.960588 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-07 03:54:31.960605 | orchestrator | Tuesday 07 April 2026 03:54:23 +0000 (0:00:01.227) 0:00:01.552 *********
2026-04-07 03:54:31.960623 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:54:31.960642 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:54:31.960660 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:54:31.960677 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:54:31.960694 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:54:31.960714 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:54:31.960734 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:54:31.960754 | orchestrator |
2026-04-07 03:54:31.960773 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-07 03:54:31.960794 | orchestrator |
2026-04-07 03:54:31.960813 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-07 03:54:31.960835 | orchestrator | Tuesday 07 April 2026 03:54:24 +0000 (0:00:01.527) 0:00:03.080 *********
2026-04-07 03:54:31.960855 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:54:31.960874 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:54:31.960893 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:54:31.960913 | orchestrator | ok: [testbed-manager]
2026-04-07 03:54:31.960933 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:54:31.960953 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:54:31.961036 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:54:31.961059 | orchestrator |
2026-04-07 03:54:31.961080 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-07 03:54:31.961102 | orchestrator |
2026-04-07 03:54:31.961124 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-07 03:54:31.961146 | orchestrator | Tuesday 07 April 2026 03:54:30 +0000 (0:00:05.820) 0:00:08.900 *********
2026-04-07 03:54:31.961166 | orchestrator | skipping: [testbed-manager]
2026-04-07 03:54:31.961186 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:54:31.961205 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:54:31.961225 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:54:31.961242 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:54:31.961254 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:54:31.961265 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:54:31.961276 | orchestrator |
2026-04-07 03:54:31.961287 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:54:31.961299 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961313 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961324 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961335 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961346 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961389 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961402 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:54:31.961413 | orchestrator |
2026-04-07 03:54:31.961424 | orchestrator |
2026-04-07 03:54:31.961435 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:54:31.961446 | orchestrator | Tuesday 07 April 2026 03:54:31 +0000 (0:00:00.627) 0:00:09.527 *********
2026-04-07 03:54:31.961473 | orchestrator | ===============================================================================
2026-04-07 03:54:31.961485 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.82s
2026-04-07 03:54:31.961496 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.53s
2026-04-07 03:54:31.961506 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.23s
2026-04-07 03:54:31.961517 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-04-07 03:54:32.396609 | orchestrator | + osism validate ceph-mons
2026-04-07 03:55:07.270820 | orchestrator |
2026-04-07 03:55:07.270938 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-07 03:55:07.270949 | orchestrator |
2026-04-07 03:55:07.270953 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-07 03:55:07.270971 | orchestrator | Tuesday 07 April 2026 03:54:49 +0000 (0:00:00.503) 0:00:00.503 *********
2026-04-07 03:55:07.270977 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.270982 | orchestrator |
2026-04-07 03:55:07.270986 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-07 03:55:07.270990 | orchestrator | Tuesday 07 April 2026 03:54:50 +0000 (0:00:00.965) 0:00:01.469 *********
2026-04-07 03:55:07.270994 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.270998 | orchestrator |
2026-04-07 03:55:07.271003 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-07 03:55:07.271007 | orchestrator | Tuesday 07 April 2026 03:54:51 +0000 (0:00:01.101) 0:00:02.571 *********
2026-04-07 03:55:07.271011 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271016 | orchestrator |
2026-04-07 03:55:07.271020 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-07 03:55:07.271024 | orchestrator | Tuesday 07 April 2026 03:54:51 +0000 (0:00:00.138) 0:00:02.709 *********
2026-04-07 03:55:07.271028 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271032 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:07.271035 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:07.271039 | orchestrator |
2026-04-07 03:55:07.271043 | orchestrator | TASK [Get container info] ******************************************************
2026-04-07 03:55:07.271047 | orchestrator | Tuesday 07 April 2026 03:54:52 +0000 (0:00:00.426) 0:00:03.136 *********
2026-04-07 03:55:07.271051 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:07.271055 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:07.271058 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271062 | orchestrator |
2026-04-07 03:55:07.271066 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-07 03:55:07.271070 | orchestrator | Tuesday 07 April 2026 03:54:53 +0000 (0:00:01.080) 0:00:04.216 *********
2026-04-07 03:55:07.271074 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271078 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:55:07.271082 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:55:07.271085 | orchestrator |
2026-04-07 03:55:07.271089 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-07 03:55:07.271093 | orchestrator | Tuesday 07 April 2026 03:54:53 +0000 (0:00:00.355) 0:00:04.572 *********
2026-04-07 03:55:07.271097 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271101 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:07.271104 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:07.271108 | orchestrator |
2026-04-07 03:55:07.271112 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-07 03:55:07.271116 | orchestrator | Tuesday 07 April 2026 03:54:54 +0000 (0:00:00.551) 0:00:05.123 *********
2026-04-07 03:55:07.271120 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271123 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:07.271127 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:07.271131 | orchestrator |
2026-04-07 03:55:07.271135 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-07 03:55:07.271154 | orchestrator | Tuesday 07 April 2026 03:54:54 +0000 (0:00:00.331) 0:00:05.454 *********
2026-04-07 03:55:07.271158 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271162 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:55:07.271166 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:55:07.271170 | orchestrator |
2026-04-07 03:55:07.271173 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-07 03:55:07.271177 | orchestrator | Tuesday 07 April 2026 03:54:55 +0000 (0:00:00.327) 0:00:05.782 *********
2026-04-07 03:55:07.271181 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271185 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:07.271188 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:07.271192 | orchestrator |
2026-04-07 03:55:07.271196 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-07 03:55:07.271203 | orchestrator | Tuesday 07 April 2026 03:54:55 +0000 (0:00:00.580) 0:00:06.362 *********
2026-04-07 03:55:07.271207 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271211 | orchestrator |
2026-04-07 03:55:07.271215 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-07 03:55:07.271219 | orchestrator | Tuesday 07 April 2026 03:54:55 +0000 (0:00:00.286) 0:00:06.649
2026-04-07 03:55:07.271222 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271226 | orchestrator |
2026-04-07 03:55:07.271230 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-07 03:55:07.271234 | orchestrator | Tuesday 07 April 2026 03:54:56 +0000 (0:00:00.254) 0:00:06.903 *********
2026-04-07 03:55:07.271238 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271241 | orchestrator |
2026-04-07 03:55:07.271245 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:07.271249 | orchestrator | Tuesday 07 April 2026 03:54:56 +0000 (0:00:00.274) 0:00:07.177 *********
2026-04-07 03:55:07.271253 | orchestrator |
2026-04-07 03:55:07.271256 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:07.271260 | orchestrator | Tuesday 07 April 2026 03:54:56 +0000 (0:00:00.073) 0:00:07.251 *********
2026-04-07 03:55:07.271264 | orchestrator |
2026-04-07 03:55:07.271268 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:07.271271 | orchestrator | Tuesday 07 April 2026 03:54:56 +0000 (0:00:00.082) 0:00:07.333 *********
2026-04-07 03:55:07.271275 | orchestrator |
2026-04-07 03:55:07.271279 | orchestrator | TASK [Print report file information] *******************************************
2026-04-07 03:55:07.271283 | orchestrator | Tuesday 07 April 2026 03:54:56 +0000 (0:00:00.103) 0:00:07.437 *********
2026-04-07 03:55:07.271287 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271290 | orchestrator |
2026-04-07 03:55:07.271294 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-07 03:55:07.271298 | orchestrator | Tuesday 07 April 2026 03:54:57 +0000 (0:00:00.291) 0:00:07.728 *********
2026-04-07 03:55:07.271302 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271305 | orchestrator |
2026-04-07 03:55:07.271320 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-07 03:55:07.271398 | orchestrator | Tuesday 07 April 2026 03:54:57 +0000 (0:00:00.254) 0:00:07.983 *********
2026-04-07 03:55:07.271405 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271410 | orchestrator |
2026-04-07 03:55:07.271414 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-07 03:55:07.271419 | orchestrator | Tuesday 07 April 2026 03:54:57 +0000 (0:00:00.139) 0:00:08.123 *********
2026-04-07 03:55:07.271424 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:55:07.271429 | orchestrator |
2026-04-07 03:55:07.271436 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-07 03:55:07.271441 | orchestrator | Tuesday 07 April 2026 03:54:59 +0000 (0:00:01.924) 0:00:10.047 *********
2026-04-07 03:55:07.271445 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271450 | orchestrator |
2026-04-07 03:55:07.271459 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-07 03:55:07.271464 | orchestrator | Tuesday 07 April 2026 03:54:59 +0000 (0:00:00.602) 0:00:10.649 *********
2026-04-07 03:55:07.271469 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271473 | orchestrator |
2026-04-07 03:55:07.271478 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-07 03:55:07.271482 | orchestrator | Tuesday 07 April 2026 03:55:00 +0000 (0:00:00.143) 0:00:10.793 *********
2026-04-07 03:55:07.271487 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271491 | orchestrator |
2026-04-07 03:55:07.271496 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-07 03:55:07.271500 | orchestrator | Tuesday 07 April 2026 03:55:00 +0000 (0:00:00.370) 0:00:11.164 *********
2026-04-07 03:55:07.271505 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271509 | orchestrator |
2026-04-07 03:55:07.271514 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-07 03:55:07.271518 | orchestrator | Tuesday 07 April 2026 03:55:00 +0000 (0:00:00.325) 0:00:11.489 *********
2026-04-07 03:55:07.271523 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271527 | orchestrator |
2026-04-07 03:55:07.271532 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-07 03:55:07.271536 | orchestrator | Tuesday 07 April 2026 03:55:00 +0000 (0:00:00.134) 0:00:11.624 *********
2026-04-07 03:55:07.271541 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271545 | orchestrator |
2026-04-07 03:55:07.271550 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-07 03:55:07.271555 | orchestrator | Tuesday 07 April 2026 03:55:01 +0000 (0:00:00.164) 0:00:11.788 *********
2026-04-07 03:55:07.271559 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271563 | orchestrator |
2026-04-07 03:55:07.271567 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-07 03:55:07.271571 | orchestrator | Tuesday 07 April 2026 03:55:01 +0000 (0:00:00.127) 0:00:11.916 *********
2026-04-07 03:55:07.271575 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:55:07.271578 | orchestrator |
2026-04-07 03:55:07.271582 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-07 03:55:07.271586 | orchestrator | Tuesday 07 April 2026 03:55:02 +0000 (0:00:01.432) 0:00:13.349 *********
2026-04-07 03:55:07.271590 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271594 | orchestrator |
2026-04-07 03:55:07.271597 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-07 03:55:07.271601 | orchestrator | Tuesday 07 April 2026 03:55:02 +0000 (0:00:00.321) 0:00:13.670 *********
2026-04-07 03:55:07.271605 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271609 | orchestrator |
2026-04-07 03:55:07.271613 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-07 03:55:07.271616 | orchestrator | Tuesday 07 April 2026 03:55:03 +0000 (0:00:00.162) 0:00:13.833 *********
2026-04-07 03:55:07.271620 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:07.271624 | orchestrator |
2026-04-07 03:55:07.271628 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-07 03:55:07.271634 | orchestrator | Tuesday 07 April 2026 03:55:03 +0000 (0:00:00.176) 0:00:14.010 *********
2026-04-07 03:55:07.271638 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271642 | orchestrator |
2026-04-07 03:55:07.271646 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-07 03:55:07.271650 | orchestrator | Tuesday 07 April 2026 03:55:03 +0000 (0:00:00.143) 0:00:14.153 *********
2026-04-07 03:55:07.271653 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271657 | orchestrator |
2026-04-07 03:55:07.271661 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-07 03:55:07.271665 | orchestrator | Tuesday 07 April 2026 03:55:03 +0000 (0:00:00.427) 0:00:14.580 *********
2026-04-07 03:55:07.271669 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.271673 | orchestrator |
2026-04-07 03:55:07.271680 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-07 03:55:07.271684 | orchestrator | Tuesday 07 April 2026 03:55:04 +0000 (0:00:00.284) 0:00:14.864 *********
2026-04-07 03:55:07.271688 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:07.271692 | orchestrator |
2026-04-07 03:55:07.271695 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-07 03:55:07.271699 | orchestrator | Tuesday 07 April 2026 03:55:04 +0000 (0:00:00.295) 0:00:15.160 *********
2026-04-07 03:55:07.271703 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.271707 | orchestrator |
2026-04-07 03:55:07.271711 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-07 03:55:07.271714 | orchestrator | Tuesday 07 April 2026 03:55:06 +0000 (0:00:01.960) 0:00:17.120 *********
2026-04-07 03:55:07.271718 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.271722 | orchestrator |
2026-04-07 03:55:07.271726 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-07 03:55:07.271730 | orchestrator | Tuesday 07 April 2026 03:55:06 +0000 (0:00:00.301) 0:00:17.422 *********
2026-04-07 03:55:07.271733 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:07.271737 | orchestrator |
2026-04-07 03:55:07.271745 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:10.291821 | orchestrator | Tuesday 07 April 2026 03:55:07 +0000 (0:00:00.296) 0:00:17.718 *********
2026-04-07 03:55:10.291895 | orchestrator |
2026-04-07 03:55:10.291901 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:10.291906 | orchestrator | Tuesday 07 April 2026 03:55:07 +0000 (0:00:00.095) 0:00:17.814 *********
2026-04-07 03:55:10.291910 | orchestrator |
2026-04-07 03:55:10.291914 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:10.291918 | orchestrator | Tuesday 07 April 2026 03:55:07 +0000 (0:00:00.080) 0:00:17.895 *********
2026-04-07 03:55:10.291922 | orchestrator |
2026-04-07 03:55:10.291926 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-07 03:55:10.291939 | orchestrator | Tuesday 07 April 2026 03:55:07 +0000 (0:00:00.083) 0:00:17.978 *********
2026-04-07 03:55:10.291944 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:10.291948 | orchestrator |
2026-04-07 03:55:10.291958 | orchestrator | TASK [Print report file information] *******************************************
2026-04-07 03:55:10.291962 | orchestrator | Tuesday 07 April 2026 03:55:08 +0000 (0:00:01.664) 0:00:19.643 *********
2026-04-07 03:55:10.291965 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-07 03:55:10.291969 | orchestrator |  "msg": [
2026-04-07 03:55:10.291974 | orchestrator |  "Validator run completed.",
2026-04-07 03:55:10.291978 | orchestrator |  "You can find the report file here:",
2026-04-07 03:55:10.291982 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-07T03:54:50+00:00-report.json",
2026-04-07 03:55:10.291987 | orchestrator |  "on the following host:",
2026-04-07 03:55:10.291991 | orchestrator |  "testbed-manager"
2026-04-07 03:55:10.291995 | orchestrator |  ]
2026-04-07 03:55:10.291999 | orchestrator | }
2026-04-07 03:55:10.292004 | orchestrator |
2026-04-07 03:55:10.292007 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:55:10.292012 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-07 03:55:10.292017 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:55:10.292022 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:55:10.292025 | orchestrator |
2026-04-07 03:55:10.292048 | orchestrator |
2026-04-07 03:55:10.292052 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:55:10.292056 | orchestrator | Tuesday 07 April 2026 03:55:09 +0000 (0:00:00.920) 0:00:20.564 *********
2026-04-07 03:55:10.292060 | orchestrator | ===============================================================================
2026-04-07 03:55:10.292063 | orchestrator | Aggregate test results step one ----------------------------------------- 1.96s
2026-04-07 03:55:10.292067 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.92s
2026-04-07 03:55:10.292071 | orchestrator | Write report file ------------------------------------------------------- 1.66s
2026-04-07 03:55:10.292075 | orchestrator | Gather status data ------------------------------------------------------ 1.43s
2026-04-07 03:55:10.292078 | orchestrator | Create report output directory ------------------------------------------ 1.10s
2026-04-07 03:55:10.292082 | orchestrator | Get container info ------------------------------------------------------ 1.08s
2026-04-07 03:55:10.292086 | orchestrator | Get timestamp for report file ------------------------------------------- 0.97s
2026-04-07 03:55:10.292090 | orchestrator | Print report file information ------------------------------------------- 0.92s
2026-04-07 03:55:10.292094 | orchestrator | Set quorum test data ---------------------------------------------------- 0.60s
2026-04-07 03:55:10.292098 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.58s
2026-04-07 03:55:10.292102 | orchestrator | Set test result to passed if container is existing ---------------------- 0.55s
2026-04-07 03:55:10.292106 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.43s
2026-04-07 03:55:10.292110 | orchestrator | Prepare test data for container existence test -------------------------- 0.43s
2026-04-07 03:55:10.292114 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s
2026-04-07 03:55:10.292117 | orchestrator | Set test result to failed if container is missing ----------------------- 0.36s
2026-04-07 03:55:10.292121 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-04-07 03:55:10.292125 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s
2026-04-07 03:55:10.292129 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s
2026-04-07 03:55:10.292132 | orchestrator | Set health test data ---------------------------------------------------- 0.32s
2026-04-07 03:55:10.292136 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2026-04-07 03:55:10.717651 | orchestrator | + osism validate ceph-mgrs
2026-04-07 03:55:44.428182 | orchestrator |
2026-04-07 03:55:44.428342 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-07 03:55:44.428362 | orchestrator |
2026-04-07 03:55:44.428374 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-07 03:55:44.428385 | orchestrator | Tuesday 07 April 2026 03:55:28 +0000 (0:00:00.516) 0:00:00.516 *********
2026-04-07 03:55:44.428397 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.428408 | orchestrator |
2026-04-07 03:55:44.428418 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-07 03:55:44.428429 | orchestrator | Tuesday 07 April 2026 03:55:29 +0000 (0:00:00.998) 0:00:01.515 *********
2026-04-07 03:55:44.428458 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.428469 | orchestrator |
2026-04-07 03:55:44.428479 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-07 03:55:44.428491 | orchestrator | Tuesday 07 April 2026 03:55:30 +0000 (0:00:01.110) 0:00:02.625 *********
2026-04-07 03:55:44.428502 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.428514 | orchestrator |
2026-04-07 03:55:44.428525 | orchestrator | TASK [Prepare test data for container existence test] **************************
2026-04-07 03:55:44.428537 | orchestrator | Tuesday 07 April 2026 03:55:30 +0000 (0:00:00.149) 0:00:02.774 *********
2026-04-07 03:55:44.428548 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.428560 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:44.428592 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:44.428604 | orchestrator |
2026-04-07 03:55:44.428615 | orchestrator | TASK [Get container info] ******************************************************
2026-04-07 03:55:44.428626 | orchestrator | Tuesday 07 April 2026 03:55:30 +0000 (0:00:00.358) 0:00:03.133 *********
2026-04-07 03:55:44.428639 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:44.428653 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:44.428666 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.428679 | orchestrator |
2026-04-07 03:55:44.428693 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-07 03:55:44.428706 | orchestrator | Tuesday 07 April 2026 03:55:31 +0000 (0:00:01.087) 0:00:04.221 *********
2026-04-07 03:55:44.428720 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.428735 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:55:44.428750 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:55:44.428765 | orchestrator |
2026-04-07 03:55:44.428779 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-07 03:55:44.428823 | orchestrator | Tuesday 07 April 2026 03:55:32 +0000 (0:00:00.364) 0:00:04.585 *********
2026-04-07 03:55:44.428840 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.428856 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:44.428870 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:44.428885 | orchestrator |
2026-04-07 03:55:44.428899 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-07 03:55:44.428913 | orchestrator | Tuesday 07 April 2026 03:55:32 +0000 (0:00:00.555) 0:00:05.141 *********
2026-04-07 03:55:44.428925 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.428938 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:44.428950 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:44.428962 | orchestrator |
2026-04-07 03:55:44.428973 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-07 03:55:44.428985 | orchestrator | Tuesday 07 April 2026 03:55:33 +0000 (0:00:00.367) 0:00:05.508 *********
2026-04-07 03:55:44.428997 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429008 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:55:44.429020 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:55:44.429031 | orchestrator |
2026-04-07 03:55:44.429042 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-07 03:55:44.429054 | orchestrator | Tuesday 07 April 2026 03:55:33 +0000 (0:00:00.333) 0:00:05.841 *********
2026-04-07 03:55:44.429066 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.429078 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:55:44.429090 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:55:44.429102 | orchestrator |
2026-04-07 03:55:44.429113 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-07 03:55:44.429125 | orchestrator | Tuesday 07 April 2026 03:55:34 +0000 (0:00:00.553) 0:00:06.394 *********
2026-04-07 03:55:44.429137 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429149 | orchestrator |
2026-04-07 03:55:44.429161 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-07 03:55:44.429174 | orchestrator | Tuesday 07 April 2026 03:55:34 +0000 (0:00:00.288) 0:00:06.683 *********
2026-04-07 03:55:44.429187 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429200 | orchestrator |
2026-04-07 03:55:44.429212 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-07 03:55:44.429235 | orchestrator | Tuesday 07 April 2026 03:55:34 +0000 (0:00:00.287) 0:00:06.970 *********
2026-04-07 03:55:44.429247 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429259 | orchestrator |
2026-04-07 03:55:44.429271 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429283 | orchestrator | Tuesday 07 April 2026 03:55:34 +0000 (0:00:00.296) 0:00:07.266 *********
2026-04-07 03:55:44.429320 | orchestrator |
2026-04-07 03:55:44.429333 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429345 | orchestrator | Tuesday 07 April 2026 03:55:34 +0000 (0:00:00.077) 0:00:07.344 *********
2026-04-07 03:55:44.429368 | orchestrator |
2026-04-07 03:55:44.429376 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429384 | orchestrator | Tuesday 07 April 2026 03:55:35 +0000 (0:00:00.083) 0:00:07.427 *********
2026-04-07 03:55:44.429391 | orchestrator |
2026-04-07 03:55:44.429399 | orchestrator | TASK [Print report file information] *******************************************
2026-04-07 03:55:44.429406 | orchestrator | Tuesday 07 April 2026 03:55:35 +0000 (0:00:00.111) 0:00:07.538 *********
2026-04-07 03:55:44.429414 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429421 | orchestrator |
2026-04-07 03:55:44.429429 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-07 03:55:44.429436 | orchestrator | Tuesday 07 April 2026 03:55:35 +0000 (0:00:00.279) 0:00:07.818 *********
2026-04-07 03:55:44.429443 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429451 | orchestrator |
2026-04-07 03:55:44.429478 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-07 03:55:44.429486 | orchestrator | Tuesday 07 April 2026 03:55:35 +0000 (0:00:00.268) 0:00:08.086 *********
2026-04-07 03:55:44.429493 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.429501 | orchestrator |
2026-04-07 03:55:44.429508 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-07 03:55:44.429516 | orchestrator | Tuesday 07 April 2026 03:55:35 +0000 (0:00:00.133) 0:00:08.220 *********
2026-04-07 03:55:44.429523 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:55:44.429530 | orchestrator |
2026-04-07 03:55:44.429538 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-07 03:55:44.429545 | orchestrator | Tuesday 07 April 2026 03:55:38 +0000 (0:00:02.211) 0:00:10.431 *********
2026-04-07 03:55:44.429552 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.429559 | orchestrator |
2026-04-07 03:55:44.429567 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-07 03:55:44.429574 | orchestrator | Tuesday 07 April 2026 03:55:38 +0000 (0:00:00.516) 0:00:10.948 *********
2026-04-07 03:55:44.429581 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.429589 | orchestrator |
2026-04-07 03:55:44.429596 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-07 03:55:44.429603 | orchestrator | Tuesday 07 April 2026 03:55:38 +0000 (0:00:00.359) 0:00:11.307 *********
2026-04-07 03:55:44.429611 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429618 | orchestrator |
2026-04-07 03:55:44.429625 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-07 03:55:44.429633 | orchestrator | Tuesday 07 April 2026 03:55:39 +0000 (0:00:00.150) 0:00:11.458 *********
2026-04-07 03:55:44.429640 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:55:44.429647 | orchestrator |
2026-04-07 03:55:44.429655 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-07 03:55:44.429662 | orchestrator | Tuesday 07 April 2026 03:55:39 +0000 (0:00:00.185) 0:00:11.643 *********
2026-04-07 03:55:44.429669 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.429677 | orchestrator |
2026-04-07 03:55:44.429684 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-07 03:55:44.429692 | orchestrator | Tuesday 07 April 2026 03:55:39 +0000 (0:00:00.284) 0:00:11.928 *********
2026-04-07 03:55:44.429699 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:55:44.429706 | orchestrator |
2026-04-07 03:55:44.429714 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-07 03:55:44.429721 | orchestrator | Tuesday 07 April 2026 03:55:39 +0000 (0:00:00.277) 0:00:12.206 *********
2026-04-07 03:55:44.429729 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.429736 | orchestrator |
2026-04-07 03:55:44.429743 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-07 03:55:44.429751 | orchestrator | Tuesday 07 April 2026 03:55:41 +0000 (0:00:01.500) 0:00:13.706 *********
2026-04-07 03:55:44.429758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.429771 | orchestrator |
2026-04-07 03:55:44.429779 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-07 03:55:44.429786 | orchestrator | Tuesday 07 April 2026 03:55:41 +0000 (0:00:00.281) 0:00:13.988 *********
2026-04-07 03:55:44.429794 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.429801 | orchestrator |
2026-04-07 03:55:44.429808 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429816 | orchestrator | Tuesday 07 April 2026 03:55:41 +0000 (0:00:00.263) 0:00:14.252 *********
2026-04-07 03:55:44.429823 | orchestrator |
2026-04-07 03:55:44.429831 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429838 | orchestrator | Tuesday 07 April 2026 03:55:41 +0000 (0:00:00.081) 0:00:14.334 *********
2026-04-07 03:55:44.429846 | orchestrator |
2026-04-07 03:55:44.429853 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:55:44.429860 | orchestrator | Tuesday 07 April 2026 03:55:42 +0000 (0:00:00.082) 0:00:14.417 *********
2026-04-07 03:55:44.429867 | orchestrator |
2026-04-07 03:55:44.429875 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-07 03:55:44.429882 | orchestrator | Tuesday 07 April 2026 03:55:42 +0000 (0:00:00.303) 0:00:14.720 *********
2026-04-07 03:55:44.429889 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-07 03:55:44.429897 | orchestrator |
2026-04-07 03:55:44.429909 | orchestrator | TASK [Print report file information] *******************************************
2026-04-07 03:55:44.429916 | orchestrator | Tuesday 07 April 2026 03:55:43 +0000 (0:00:01.601) 0:00:16.321 *********
2026-04-07 03:55:44.429924 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-07 03:55:44.429931 | orchestrator |  "msg": [
2026-04-07 03:55:44.429939 | orchestrator |  "Validator run completed.",
2026-04-07 03:55:44.429946 | orchestrator |  "You can find the report file here:",
2026-04-07 03:55:44.429965 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-07T03:55:28+00:00-report.json",
2026-04-07 03:55:44.429983 | orchestrator |  "on the following host:",
2026-04-07 03:55:44.429991 | orchestrator |  "testbed-manager"
2026-04-07 03:55:44.429998 | orchestrator |  ]
2026-04-07 03:55:44.430006 | orchestrator | }
2026-04-07 03:55:44.430060 | orchestrator |
2026-04-07 03:55:44.430070 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:55:44.430079 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-07 03:55:44.430088 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:55:44.430102 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-07 03:55:44.845351 | orchestrator |
2026-04-07 03:55:44.845430 | orchestrator |
2026-04-07 03:55:44.845438 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:55:44.845446 | orchestrator | Tuesday 07 April 2026 03:55:44 +0000 (0:00:00.456) 0:00:16.778 *********
2026-04-07 03:55:44.845452 | orchestrator | ===============================================================================
2026-04-07 03:55:44.845457 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.21s
2026-04-07 03:55:44.845463 | orchestrator | Write report file ------------------------------------------------------- 1.60s
2026-04-07 03:55:44.845469 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s
2026-04-07 03:55:44.845475 | orchestrator | Create report output directory ------------------------------------------ 1.11s
2026-04-07 03:55:44.845480 | orchestrator | Get container info ------------------------------------------------------ 1.09s
2026-04-07 03:55:44.845486 | orchestrator | Get timestamp for report file ------------------------------------------- 1.00s
2026-04-07 03:55:44.845509 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-04-07 03:55:44.845515 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.55s
2026-04-07 03:55:44.845521 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.52s
2026-04-07 03:55:44.845530 | orchestrator | Flush handlers ---------------------------------------------------------- 0.47s
2026-04-07 03:55:44.845539 | orchestrator | Print report file information ------------------------------------------- 0.46s
2026-04-07 03:55:44.845547 | orchestrator | Prepare test data ------------------------------------------------------- 0.37s
2026-04-07 03:55:44.845555 | orchestrator | Set test result to failed if container is missing ----------------------- 0.36s
2026-04-07 03:55:44.845564 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.36s
2026-04-07 03:55:44.845572 | orchestrator | Prepare test data for container existence test -------------------------- 0.36s
2026-04-07 03:55:44.845581 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2026-04-07 03:55:44.845589 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2026-04-07 03:55:44.845597 | orchestrator | Aggregate test results step one ----------------------------------------- 0.29s
2026-04-07 03:55:44.845605 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2026-04-07 03:55:44.845615 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-04-07 03:55:45.298550 | orchestrator | + osism validate ceph-osds
2026-04-07 03:56:07.714367 | orchestrator |
2026-04-07 03:56:07.714448 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-07 03:56:07.714457 | orchestrator |
2026-04-07 03:56:07.714464 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-07 03:56:07.714469 | orchestrator | Tuesday 07 April 2026 03:56:02 +0000 (0:00:00.463) 0:00:00.463 *********
2026-04-07 03:56:07.714476 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:07.714481 | orchestrator |
2026-04-07 03:56:07.714486 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-07 03:56:07.714491 | orchestrator | Tuesday 07 April 2026 03:56:03 +0000 (0:00:00.891) 0:00:01.354 *********
2026-04-07 03:56:07.714495 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:07.714500 | orchestrator |
2026-04-07 03:56:07.714505 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-07 03:56:07.714510 | orchestrator | Tuesday 07 April 2026 03:56:04 +0000 (0:00:00.564) 0:00:01.919 *********
2026-04-07 03:56:07.714514 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:07.714519 | orchestrator |
2026-04-07 03:56:07.714524 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-07 03:56:07.714529 | orchestrator | Tuesday 07 April 2026 03:56:04 +0000 (0:00:00.807) 0:00:02.726 *********
2026-04-07 03:56:07.714533 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:56:07.714539 | orchestrator |
2026-04-07 03:56:07.714544 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-07 03:56:07.714549 | orchestrator | Tuesday 07 April 2026 03:56:05 +0000 (0:00:00.151) 0:00:02.877 *********
2026-04-07 03:56:07.714554 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:56:07.714560 | orchestrator |
2026-04-07 03:56:07.714569 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-07 03:56:07.714576 | orchestrator | Tuesday 07 April 2026 03:56:05 +0000 (0:00:00.146) 0:00:03.024 *********
2026-04-07 03:56:07.714583 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:56:07.714590 | orchestrator | skipping: [testbed-node-4]
2026-04-07 03:56:07.714598 | orchestrator | skipping: [testbed-node-5]
2026-04-07 03:56:07.714604 | orchestrator |
2026-04-07 03:56:07.714612 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-07 03:56:07.714619 | orchestrator | Tuesday 07 April 2026 03:56:05 +0000 (0:00:00.358) 0:00:03.383 *********
2026-04-07 03:56:07.714645 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:56:07.714652 | orchestrator |
2026-04-07 03:56:07.714659 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-07 03:56:07.714666 | orchestrator | Tuesday 07 April 2026 03:56:05 +0000 (0:00:00.154) 0:00:03.537 *********
2026-04-07 03:56:07.714673 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:56:07.714680 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:56:07.714687 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:56:07.714694 | orchestrator |
2026-04-07 03:56:07.714701 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-07 03:56:07.714708 | orchestrator | Tuesday 07 April 2026 03:56:06 +0000 (0:00:00.330) 0:00:03.868 *********
2026-04-07 03:56:07.714716 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:56:07.714723 | orchestrator |
2026-04-07 03:56:07.714730 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-07 03:56:07.714737 | orchestrator | Tuesday 07 April 2026 03:56:06 +0000 (0:00:00.843) 0:00:04.712 *********
2026-04-07 03:56:07.714744 | orchestrator | ok: [testbed-node-3]
2026-04-07 03:56:07.714751 | orchestrator | ok: [testbed-node-4]
2026-04-07 03:56:07.714759 | orchestrator | ok: [testbed-node-5]
2026-04-07 03:56:07.714766 | orchestrator |
2026-04-07 03:56:07.714773 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-07 03:56:07.714780 | orchestrator | Tuesday 07 April 2026 03:56:07 +0000 (0:00:00.391) 0:00:05.103 *********
2026-04-07 03:56:07.714790 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fa947a8e06c6b1a973ab368592e7f98ae6b2b36d53895a31ccf1ba52fa9ef60e', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-07 03:56:07.714802 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9ad06e93ece5f778bf86ef0dc9b2be08df5d75fd4768b09de148af7cc8f65c2b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-07 03:56:07.714812 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b3532dfdaefc3afbdfaeb0f4e77de6fba8fba48f828e957b4bd43a56a40a6b7b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-07 03:56:07.714821 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31cafadb9c450a1efebde941142d5fdee39cf54542b54a305cffd16810cad155', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-04-07 03:56:07.714828 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c4f2337c5aa9239b1f1f10baed57ccc99346c8c70745eaf0ed3507561debc8c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-07 03:56:07.714899 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0127d79719daab7828ed6efea1e4c094184d9454a4d700f0841028fd47f6f523', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-07 03:56:07.714912 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3950a96e0cf0964e828b9d11d39ec18c4a1db3422c9c1fd15909fd85f804a6e5', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-07 03:56:07.714920 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9fcd07092f9cea85414620e08feec68c8c07a75b029740e6bed060aa9d08841f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})
2026-04-07 03:56:07.714938 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1916e423e76abdf34e2d98d7be46aa6495893aa56384b4f3b1ab13f01979caeb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.714952 | orchestrator | skipping: [testbed-node-3] => (item={'id': '397b58521f657b8bc68aa03210ef41e481820483207d784d440c401313320350', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.714962 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f569ece013ffb6dccd90672190c810e2441051b51610f8a70d8faeb2dab12d38', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.714974 | orchestrator | ok: [testbed-node-3] => (item={'id': '31ba3941485aa47de92e21795769bcf363ac5e7fed22466a26be2cb0c35147a0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.714984 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c44e84aeb1393b1bc019c087cee17d2d5bd091ae5ae83552992a3e6b2a84862f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.714992 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'db07c860f77eee302f70b7cba4e6b7bf69c3f6699c35e3219b6215458030b7bd', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.715001 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fbd2d77a506393e7aaca199a41a70e5033aa21250c2d8afd51aadaed705919a3', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-07 03:56:07.715008 | orchestrator | skipping: [testbed-node-3] => (item={'id': '34bef0e0fae19283b290f54724519f97b6c3b54852e8bd4432c71494f3e969be', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-07 03:56:07.715014 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78923adcfb00fdf0645eab7fb899e8ae7c8d02e81b09576c4ecb344a34f479d5', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-07 03:56:07.715020 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f088dcfdb1435181f20710728db234ec6101436b08b1b726241f7b694ebed51', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-07 03:56:07.715026 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc2577c5d8026ff7a03242bb2704bab68ed8500f17fd495c76476a6088b65db8', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-07 03:56:07.715032 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9cf9c5ae215ab76a81864d7cfbf86f74767b2ab1b8880d484b8e605e1ef84f2c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-07 03:56:07.715044 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e887430cd1dfd893ce6bfb07b47a5db65f7bdb0ef6c95cf8ba02e42034466268', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-07 03:56:07.977071 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ed10ab9317d710d3a99799b0333099e66e16624100e584064d19677a7238ead', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-07 03:56:07.977190 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd5798ff984bece64a2580171446b104d2f892ed4c0dd976258d658808d9cf0e1', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-04-07 03:56:07.977208 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09c51de848fc23216fb4d0d504d1fab83a749a40912924497214f080766d02e0', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-07 03:56:07.977239 | orchestrator | skipping: [testbed-node-4] => (item={'id': '102bbc946e60b790c8195ad6209c8a21317048283afdd0709c1824487050cbc9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-07 03:56:07.977252 | orchestrator | skipping: [testbed-node-4] => (item={'id': '535a4d1a3513b44b0362ab1b548b159ea11cde42a4ef022e307d1bed45c0f445', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-07 03:56:07.977264 | orchestrator | skipping: [testbed-node-4] => (item={'id': '56a75a6c41d2c6566955029bc1bd4089acbc21f298c6998cc96248738c5bb6e6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})
2026-04-07 03:56:07.977303 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f1a16fe7cebedf693e9d0fa07f669aa23e21914e6042c9229ba863e42e84188', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.977318 | orchestrator | skipping: [testbed-node-4] => (item={'id': '31d4bde3adb68b1ea4c9ec7a136ed9171e27dbd5f731065188f8c3fe475d37c6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-07 03:56:07.977332 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd9cc3ca4df17131a02a733168ca36e32aa52048a7016f158eb43dc2d90fb4871', 'image':
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:07.977347 | orchestrator | ok: [testbed-node-4] => (item={'id': '83837dace69661a8e43d097b4aaf270616c435e926144cbba2323d9e3708e94f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-07 03:56:07.977359 | orchestrator | ok: [testbed-node-4] => (item={'id': '2ad1e175bb8f0521d56dfb81b97640ed425ebcf536cf871e202162ec5736c87d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-07 03:56:07.977372 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5cd61caa6bbe35dcfa8f4e3f625a3423d56b58dc46b3d1f1a63b342c91e54d5a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:07.977385 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22d6a2fded2c6fa365dfa4dd7535c11fc19e60cf9ff7e550a27dbe3c8f2d66e4', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-07 03:56:07.977399 | orchestrator | skipping: [testbed-node-4] => (item={'id': '94fe758451688f9324ca7e37a604de73cc6c2a050082df90d7fe2fd9cd6aba4f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-07 03:56:07.977443 | orchestrator | skipping: [testbed-node-4] => (item={'id': '30b25e8715ac6b8af7e794a91ee050b1b32e087416721435efd5af42917215a5', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:07.977454 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'fcdf5e9dfb07094d0c18a9afee75dc1881186e6e67c4e12a841deccca78ec334', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:07.977462 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f25f2fa1c61a491a24afc8ed0b906e712b7cd080c5656c6eb9bd39cf3a562dc3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:07.977470 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7376fd4c3ea0153c04023d9575934cc75e80b213cf12600f74b8436a4e3d78e', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-07 03:56:07.977483 | orchestrator | skipping: [testbed-node-5] => (item={'id': '871db65117605a850f3c97ac24206257894c649ccad9dd10217723e2670bcb85', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-07 03:56:07.977496 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6dfdee3bffcb9e63b129ab3375048e44279bc48e5a6b03e7cef4edd8916cd2fd', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-04-07 03:56:07.977509 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8c3e2c8e59812c95875897b8ef291ddce9c73a0b0c987390bc75deca4c357a3a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})  2026-04-07 03:56:07.977521 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'b62bf5837c3cb450a810f06e91a8898ae1205f0b2f958a9cb38b5d02abb18393', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-07 03:56:07.977534 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd612c7f07d18ce39f75336f7ceeaaec3b08b0700633e5546c34719f38c419bd6', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-07 03:56:07.977547 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b31307ace06fe9134b73f5012cd0d2e0cff514542c9d56c58692adce024ee595', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})  2026-04-07 03:56:07.977560 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f891bc68f8960fea1af83a9f44dbdd2670ba82d631e2e784158b0ea09309e718', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})  2026-04-07 03:56:07.977573 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b9819d06bea697b2eeefe4d8b7f264a889732d7d1aa674998e2010daf41040e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:07.977585 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd782129743d5a25692fab31b76ef37fe4ffc5ed388773cfe53b6c904aadb670f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:07.977598 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e617359fe8f0bf1dca7668e114d247e2cda305cd2240f34df9357daab058e59e', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:07.977620 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a53837a5b0190eb625fd0e63d1a13933359a46e2820c257dcdec9e9fbf90010f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-07 03:56:07.977637 | orchestrator | ok: [testbed-node-5] => (item={'id': '72ebf957500d5045e1b35b86b1886a6cb733d90d3e52035e4d307bd0bb4d8d92', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-07 03:56:20.125601 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8653f7b733e7dea52b05b846cc82dbfb534b3ed39b2a52ede7fd3d9000f47db4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-07 03:56:20.125722 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94dcd459ba88df59063f4ba9735354ea2f8438f931c55ece9f9661ae46fa6e3f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-07 03:56:20.125738 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3d12c6c1376a3f5fe28d29a60aa426f213b297fca7b3783b88eb885a27d0fdc7', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-07 03:56:20.125748 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e0bc8caecec948f726eb7dd0fc5db9f2df9059e2933fcbb696136a3c772b393b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:20.125758 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': 'dca46a9c74649bf673e5bc4e2a5d4d7b7eaebf0d58188a43c7754a69b115a8f7', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:20.125767 | orchestrator | skipping: [testbed-node-5] => (item={'id': '09105caf08f9f31d4d8aa5b2f81b929476cfa7d7758a603fc4165d1b59718eda', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-07 03:56:20.125775 | orchestrator | 2026-04-07 03:56:20.125785 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-07 03:56:20.125793 | orchestrator | Tuesday 07 April 2026 03:56:07 +0000 (0:00:00.592) 0:00:05.696 ********* 2026-04-07 03:56:20.125801 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.125809 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.125817 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.125824 | orchestrator | 2026-04-07 03:56:20.125832 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-07 03:56:20.125840 | orchestrator | Tuesday 07 April 2026 03:56:08 +0000 (0:00:00.355) 0:00:06.051 ********* 2026-04-07 03:56:20.125847 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.125856 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:20.125863 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:20.125871 | orchestrator | 2026-04-07 03:56:20.125878 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-07 03:56:20.125886 | orchestrator | Tuesday 07 April 2026 03:56:08 +0000 (0:00:00.545) 0:00:06.597 ********* 2026-04-07 03:56:20.125893 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.125901 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.125908 | orchestrator | ok: 
[testbed-node-5] 2026-04-07 03:56:20.125916 | orchestrator | 2026-04-07 03:56:20.125923 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-07 03:56:20.125951 | orchestrator | Tuesday 07 April 2026 03:56:09 +0000 (0:00:00.379) 0:00:06.976 ********* 2026-04-07 03:56:20.125959 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.125966 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.125974 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.125983 | orchestrator | 2026-04-07 03:56:20.125995 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-07 03:56:20.126010 | orchestrator | Tuesday 07 April 2026 03:56:09 +0000 (0:00:00.329) 0:00:07.306 ********* 2026-04-07 03:56:20.126140 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-07 03:56:20.126157 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-07 03:56:20.126171 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126184 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-07 03:56:20.126197 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-07 03:56:20.126205 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:20.126214 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-07 03:56:20.126223 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-07 03:56:20.126231 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:20.126240 | orchestrator | 2026-04-07 03:56:20.126249 | orchestrator | TASK [Get count of ceph-osd containers that are not running] 
******************* 2026-04-07 03:56:20.126258 | orchestrator | Tuesday 07 April 2026 03:56:09 +0000 (0:00:00.394) 0:00:07.701 ********* 2026-04-07 03:56:20.126294 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126303 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.126312 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.126320 | orchestrator | 2026-04-07 03:56:20.126329 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-07 03:56:20.126337 | orchestrator | Tuesday 07 April 2026 03:56:10 +0000 (0:00:00.562) 0:00:08.263 ********* 2026-04-07 03:56:20.126346 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126373 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:20.126383 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:20.126392 | orchestrator | 2026-04-07 03:56:20.126400 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-07 03:56:20.126409 | orchestrator | Tuesday 07 April 2026 03:56:10 +0000 (0:00:00.307) 0:00:08.570 ********* 2026-04-07 03:56:20.126417 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126425 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:20.126434 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:20.126442 | orchestrator | 2026-04-07 03:56:20.126450 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-07 03:56:20.126459 | orchestrator | Tuesday 07 April 2026 03:56:11 +0000 (0:00:00.345) 0:00:08.916 ********* 2026-04-07 03:56:20.126468 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126477 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.126485 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.126494 | orchestrator | 2026-04-07 03:56:20.126502 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-07 
03:56:20.126512 | orchestrator | Tuesday 07 April 2026 03:56:11 +0000 (0:00:00.539) 0:00:09.456 ********* 2026-04-07 03:56:20.126520 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126528 | orchestrator | 2026-04-07 03:56:20.126535 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-07 03:56:20.126548 | orchestrator | Tuesday 07 April 2026 03:56:11 +0000 (0:00:00.269) 0:00:09.726 ********* 2026-04-07 03:56:20.126555 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126562 | orchestrator | 2026-04-07 03:56:20.126570 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-07 03:56:20.126587 | orchestrator | Tuesday 07 April 2026 03:56:12 +0000 (0:00:00.272) 0:00:09.999 ********* 2026-04-07 03:56:20.126594 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126602 | orchestrator | 2026-04-07 03:56:20.126609 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-07 03:56:20.126616 | orchestrator | Tuesday 07 April 2026 03:56:12 +0000 (0:00:00.270) 0:00:10.269 ********* 2026-04-07 03:56:20.126624 | orchestrator | 2026-04-07 03:56:20.126631 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-07 03:56:20.126638 | orchestrator | Tuesday 07 April 2026 03:56:12 +0000 (0:00:00.071) 0:00:10.341 ********* 2026-04-07 03:56:20.126645 | orchestrator | 2026-04-07 03:56:20.126653 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-07 03:56:20.126660 | orchestrator | Tuesday 07 April 2026 03:56:12 +0000 (0:00:00.071) 0:00:10.413 ********* 2026-04-07 03:56:20.126667 | orchestrator | 2026-04-07 03:56:20.126675 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-07 03:56:20.126682 | orchestrator | Tuesday 07 April 2026 03:56:12 +0000 
(0:00:00.073) 0:00:10.486 ********* 2026-04-07 03:56:20.126689 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126697 | orchestrator | 2026-04-07 03:56:20.126704 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-07 03:56:20.126711 | orchestrator | Tuesday 07 April 2026 03:56:13 +0000 (0:00:00.257) 0:00:10.744 ********* 2026-04-07 03:56:20.126719 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126726 | orchestrator | 2026-04-07 03:56:20.126734 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-07 03:56:20.126741 | orchestrator | Tuesday 07 April 2026 03:56:13 +0000 (0:00:00.260) 0:00:11.004 ********* 2026-04-07 03:56:20.126748 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126756 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.126763 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.126770 | orchestrator | 2026-04-07 03:56:20.126778 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-07 03:56:20.126785 | orchestrator | Tuesday 07 April 2026 03:56:13 +0000 (0:00:00.376) 0:00:11.380 ********* 2026-04-07 03:56:20.126792 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126799 | orchestrator | 2026-04-07 03:56:20.126807 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-07 03:56:20.126814 | orchestrator | Tuesday 07 April 2026 03:56:14 +0000 (0:00:00.754) 0:00:12.134 ********* 2026-04-07 03:56:20.126822 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-07 03:56:20.126829 | orchestrator | 2026-04-07 03:56:20.126837 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-07 03:56:20.126844 | orchestrator | Tuesday 07 April 2026 03:56:16 +0000 (0:00:01.805) 0:00:13.940 ********* 2026-04-07 03:56:20.126851 | 
orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126859 | orchestrator | 2026-04-07 03:56:20.126866 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-07 03:56:20.126873 | orchestrator | Tuesday 07 April 2026 03:56:16 +0000 (0:00:00.137) 0:00:14.077 ********* 2026-04-07 03:56:20.126880 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126888 | orchestrator | 2026-04-07 03:56:20.126895 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-07 03:56:20.126902 | orchestrator | Tuesday 07 April 2026 03:56:16 +0000 (0:00:00.341) 0:00:14.419 ********* 2026-04-07 03:56:20.126910 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:20.126917 | orchestrator | 2026-04-07 03:56:20.126924 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-07 03:56:20.126932 | orchestrator | Tuesday 07 April 2026 03:56:16 +0000 (0:00:00.124) 0:00:14.544 ********* 2026-04-07 03:56:20.126939 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126946 | orchestrator | 2026-04-07 03:56:20.126953 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-07 03:56:20.126961 | orchestrator | Tuesday 07 April 2026 03:56:16 +0000 (0:00:00.143) 0:00:14.688 ********* 2026-04-07 03:56:20.126973 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:20.126980 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:20.126987 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:20.126995 | orchestrator | 2026-04-07 03:56:20.127002 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-07 03:56:20.127009 | orchestrator | Tuesday 07 April 2026 03:56:17 +0000 (0:00:00.348) 0:00:15.036 ********* 2026-04-07 03:56:20.127017 | orchestrator | changed: [testbed-node-3] 2026-04-07 03:56:20.127024 | orchestrator | changed: 
[testbed-node-5] 2026-04-07 03:56:20.127031 | orchestrator | changed: [testbed-node-4] 2026-04-07 03:56:31.817783 | orchestrator | 2026-04-07 03:56:31.817919 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-07 03:56:31.817948 | orchestrator | Tuesday 07 April 2026 03:56:20 +0000 (0:00:02.809) 0:00:17.845 ********* 2026-04-07 03:56:31.817969 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.817991 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818011 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.818092 | orchestrator | 2026-04-07 03:56:31.818137 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-07 03:56:31.818149 | orchestrator | Tuesday 07 April 2026 03:56:20 +0000 (0:00:00.414) 0:00:18.260 ********* 2026-04-07 03:56:31.818160 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818171 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818182 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.818193 | orchestrator | 2026-04-07 03:56:31.818204 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-07 03:56:31.818216 | orchestrator | Tuesday 07 April 2026 03:56:21 +0000 (0:00:00.594) 0:00:18.855 ********* 2026-04-07 03:56:31.818227 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:31.818240 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:31.818330 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:31.818349 | orchestrator | 2026-04-07 03:56:31.818363 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-07 03:56:31.818392 | orchestrator | Tuesday 07 April 2026 03:56:21 +0000 (0:00:00.374) 0:00:19.229 ********* 2026-04-07 03:56:31.818406 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818419 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818432 | 
orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.818445 | orchestrator | 2026-04-07 03:56:31.818459 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-07 03:56:31.818471 | orchestrator | Tuesday 07 April 2026 03:56:22 +0000 (0:00:00.605) 0:00:19.835 ********* 2026-04-07 03:56:31.818483 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:31.818494 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:31.818505 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:31.818516 | orchestrator | 2026-04-07 03:56:31.818527 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-07 03:56:31.818539 | orchestrator | Tuesday 07 April 2026 03:56:22 +0000 (0:00:00.314) 0:00:20.149 ********* 2026-04-07 03:56:31.818550 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:31.818561 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:31.818572 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:31.818583 | orchestrator | 2026-04-07 03:56:31.818595 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-07 03:56:31.818606 | orchestrator | Tuesday 07 April 2026 03:56:22 +0000 (0:00:00.329) 0:00:20.478 ********* 2026-04-07 03:56:31.818617 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818628 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818639 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.818650 | orchestrator | 2026-04-07 03:56:31.818661 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-07 03:56:31.818673 | orchestrator | Tuesday 07 April 2026 03:56:23 +0000 (0:00:00.589) 0:00:21.067 ********* 2026-04-07 03:56:31.818684 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818695 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818726 | orchestrator | ok: [testbed-node-5] 
2026-04-07 03:56:31.818737 | orchestrator | 2026-04-07 03:56:31.818749 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-07 03:56:31.818760 | orchestrator | Tuesday 07 April 2026 03:56:24 +0000 (0:00:00.868) 0:00:21.935 ********* 2026-04-07 03:56:31.818771 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818782 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818793 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.818804 | orchestrator | 2026-04-07 03:56:31.818815 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-07 03:56:31.818829 | orchestrator | Tuesday 07 April 2026 03:56:24 +0000 (0:00:00.366) 0:00:22.301 ********* 2026-04-07 03:56:31.818849 | orchestrator | skipping: [testbed-node-3] 2026-04-07 03:56:31.818867 | orchestrator | skipping: [testbed-node-4] 2026-04-07 03:56:31.818885 | orchestrator | skipping: [testbed-node-5] 2026-04-07 03:56:31.818904 | orchestrator | 2026-04-07 03:56:31.818924 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-07 03:56:31.818943 | orchestrator | Tuesday 07 April 2026 03:56:24 +0000 (0:00:00.347) 0:00:22.649 ********* 2026-04-07 03:56:31.818962 | orchestrator | ok: [testbed-node-3] 2026-04-07 03:56:31.818978 | orchestrator | ok: [testbed-node-4] 2026-04-07 03:56:31.818990 | orchestrator | ok: [testbed-node-5] 2026-04-07 03:56:31.819001 | orchestrator | 2026-04-07 03:56:31.819012 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-07 03:56:31.819023 | orchestrator | Tuesday 07 April 2026 03:56:25 +0000 (0:00:00.580) 0:00:23.229 ********* 2026-04-07 03:56:31.819034 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-07 03:56:31.819046 | orchestrator | 2026-04-07 03:56:31.819057 | orchestrator | TASK [Set validation result to failed if a test failed] 
************************
2026-04-07 03:56:31.819067 | orchestrator | Tuesday 07 April 2026 03:56:25 +0000 (0:00:00.281) 0:00:23.510 *********
2026-04-07 03:56:31.819078 | orchestrator | skipping: [testbed-node-3]
2026-04-07 03:56:31.819089 | orchestrator |
2026-04-07 03:56:31.819100 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-07 03:56:31.819119 | orchestrator | Tuesday 07 April 2026 03:56:26 +0000 (0:00:00.279) 0:00:23.790 *********
2026-04-07 03:56:31.819136 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:31.819151 | orchestrator |
2026-04-07 03:56:31.819168 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-07 03:56:31.819184 | orchestrator | Tuesday 07 April 2026 03:56:27 +0000 (0:00:01.822) 0:00:25.612 *********
2026-04-07 03:56:31.819201 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:31.819217 | orchestrator |
2026-04-07 03:56:31.819235 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-07 03:56:31.819280 | orchestrator | Tuesday 07 April 2026 03:56:28 +0000 (0:00:00.301) 0:00:25.913 *********
2026-04-07 03:56:31.819301 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:31.819319 | orchestrator |
2026-04-07 03:56:31.819363 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:56:31.819385 | orchestrator | Tuesday 07 April 2026 03:56:28 +0000 (0:00:00.287) 0:00:26.201 *********
2026-04-07 03:56:31.819403 | orchestrator |
2026-04-07 03:56:31.819421 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:56:31.819432 | orchestrator | Tuesday 07 April 2026 03:56:28 +0000 (0:00:00.074) 0:00:26.276 *********
2026-04-07 03:56:31.819443 | orchestrator |
2026-04-07 03:56:31.819454 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-07 03:56:31.819465 | orchestrator | Tuesday 07 April 2026 03:56:28 +0000 (0:00:00.074) 0:00:26.350 *********
2026-04-07 03:56:31.819476 | orchestrator |
2026-04-07 03:56:31.819487 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-07 03:56:31.819498 | orchestrator | Tuesday 07 April 2026 03:56:28 +0000 (0:00:00.082) 0:00:26.433 *********
2026-04-07 03:56:31.819524 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-07 03:56:31.819536 | orchestrator |
2026-04-07 03:56:31.819547 | orchestrator | TASK [Print report file information] *******************************************
2026-04-07 03:56:31.819558 | orchestrator | Tuesday 07 April 2026 03:56:30 +0000 (0:00:01.755) 0:00:28.188 *********
2026-04-07 03:56:31.819578 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-07 03:56:31.819589 | orchestrator |     "msg": [
2026-04-07 03:56:31.819601 | orchestrator |         "Validator run completed.",
2026-04-07 03:56:31.819612 | orchestrator |         "You can find the report file here:",
2026-04-07 03:56:31.819623 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-04-07T03:56:03+00:00-report.json",
2026-04-07 03:56:31.819636 | orchestrator |         "on the following host:",
2026-04-07 03:56:31.819647 | orchestrator |         "testbed-manager"
2026-04-07 03:56:31.819658 | orchestrator |     ]
2026-04-07 03:56:31.819670 | orchestrator | }
2026-04-07 03:56:31.819681 | orchestrator |
2026-04-07 03:56:31.819692 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:56:31.819718 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
2026-04-07 03:56:31.819732 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0  failed=0  skipped=9  rescued=0  ignored=0
2026-04-07 03:56:31.819743 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0  failed=0  skipped=9  rescued=0  ignored=0
2026-04-07 03:56:31.819755 | orchestrator |
2026-04-07 03:56:31.819766 | orchestrator |
2026-04-07 03:56:31.819777 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:56:31.819788 | orchestrator | Tuesday 07 April 2026 03:56:31 +0000 (0:00:00.962) 0:00:29.150 *********
2026-04-07 03:56:31.819799 | orchestrator | ===============================================================================
2026-04-07 03:56:31.819811 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.81s
2026-04-07 03:56:31.819822 | orchestrator | Aggregate test results step one ----------------------------------------- 1.82s
2026-04-07 03:56:31.819840 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.81s
2026-04-07 03:56:31.819859 | orchestrator | Write report file ------------------------------------------------------- 1.76s
2026-04-07 03:56:31.819876 | orchestrator | Print report file information ------------------------------------------- 0.96s
2026-04-07 03:56:31.819894 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s
2026-04-07 03:56:31.819914 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.87s
2026-04-07 03:56:31.819933 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.84s
2026-04-07 03:56:31.819952 | orchestrator | Create report output directory ------------------------------------------ 0.81s
2026-04-07 03:56:31.819970 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.75s
2026-04-07 03:56:31.819990 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.61s
2026-04-07 03:56:31.820008 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.59s
2026-04-07 03:56:31.820026 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.59s
2026-04-07 03:56:31.820038 | orchestrator | Prepare test data ------------------------------------------------------- 0.59s
2026-04-07 03:56:31.820049 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.58s
2026-04-07 03:56:31.820060 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.56s
2026-04-07 03:56:31.820071 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s
2026-04-07 03:56:31.820082 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.55s
2026-04-07 03:56:31.820107 | orchestrator | Set test result to passed if all containers are running ----------------- 0.54s
2026-04-07 03:56:31.820126 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.41s
2026-04-07 03:56:32.214627 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-07 03:56:32.219974 | orchestrator | + set -e
2026-04-07 03:56:32.220094 | orchestrator | + source /opt/manager-vars.sh
2026-04-07 03:56:32.221467 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-07 03:56:32.221536 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-07 03:56:32.221545 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-07 03:56:32.221553 | orchestrator | ++ CEPH_VERSION=reef
2026-04-07 03:56:32.221558 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-07 03:56:32.221564 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-07 03:56:32.221570 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-07 03:56:32.221577 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-07 03:56:32.221584 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-07 03:56:32.221594 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-07 03:56:32.221601 | orchestrator | ++ export ARA=false
2026-04-07 03:56:32.221608 | orchestrator | ++ ARA=false
2026-04-07 03:56:32.221614 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-07 03:56:32.221619 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-07 03:56:32.221626 | orchestrator | ++ export TEMPEST=false
2026-04-07 03:56:32.221632 | orchestrator | ++ TEMPEST=false
2026-04-07 03:56:32.221638 | orchestrator | ++ export IS_ZUUL=true
2026-04-07 03:56:32.221644 | orchestrator | ++ IS_ZUUL=true
2026-04-07 03:56:32.221649 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 03:56:32.221656 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132
2026-04-07 03:56:32.221661 | orchestrator | ++ export EXTERNAL_API=false
2026-04-07 03:56:32.221667 | orchestrator | ++ EXTERNAL_API=false
2026-04-07 03:56:32.221673 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-07 03:56:32.221678 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-07 03:56:32.221685 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-07 03:56:32.221691 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-07 03:56:32.221698 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-07 03:56:32.221704 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-07 03:56:32.221710 | orchestrator | + source /etc/os-release
2026-04-07 03:56:32.221716 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-07 03:56:32.221722 | orchestrator | ++ NAME=Ubuntu
2026-04-07 03:56:32.221729 | orchestrator | ++ VERSION_ID=24.04
2026-04-07 03:56:32.221735 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-07 03:56:32.221739 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-07 03:56:32.221743 | orchestrator | ++ ID=ubuntu
2026-04-07 03:56:32.221747 | orchestrator | ++ ID_LIKE=debian
2026-04-07 03:56:32.221751 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-07 03:56:32.221755 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-07 03:56:32.221760 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-07 03:56:32.221764 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-07 03:56:32.221768 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-07 03:56:32.221772 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-07 03:56:32.221776 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-07 03:56:32.221781 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-07 03:56:32.221787 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-07 03:56:32.245027 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-07 03:56:58.018152 | orchestrator |
2026-04-07 03:56:58.018264 | orchestrator | # Status of Elasticsearch
2026-04-07 03:56:58.018273 | orchestrator |
2026-04-07 03:56:58.018278 | orchestrator | + pushd /opt/configuration/contrib
2026-04-07 03:56:58.018284 | orchestrator | + echo
2026-04-07 03:56:58.018288 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-07 03:56:58.018293 | orchestrator | + echo
2026-04-07 03:56:58.018297 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-07 03:56:58.218955 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-07 03:56:58.219264 | orchestrator |
2026-04-07 03:56:58.219285 | orchestrator | + echo
2026-04-07 03:56:58.219290 | orchestrator | + echo '# Status of MariaDB'
2026-04-07 03:56:58.219346 | orchestrator | # Status of MariaDB
2026-04-07 03:56:58.219353 | orchestrator |
2026-04-07 03:56:58.219357 | orchestrator | + echo
2026-04-07 03:56:58.221448 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-07 03:56:58.289213 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-07 03:56:58.289342 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-07 03:56:58.289352 | orchestrator | + MARIADB_USER=root_shard_0
2026-04-07 03:56:58.289358 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-04-07 03:56:58.343825 | orchestrator | Reading package lists...
2026-04-07 03:56:58.732682 | orchestrator | Building dependency tree...
2026-04-07 03:56:58.733291 | orchestrator | Reading state information...
2026-04-07 03:56:59.316494 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-04-07 03:56:59.316578 | orchestrator | bc set to manually installed.
2026-04-07 03:56:59.316585 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2026-04-07 03:57:00.096730 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-04-07 03:57:00.097426 | orchestrator |
2026-04-07 03:57:00.097462 | orchestrator | # Status of Prometheus
2026-04-07 03:57:00.097471 | orchestrator |
2026-04-07 03:57:00.097480 | orchestrator | + echo
2026-04-07 03:57:00.097489 | orchestrator | + echo '# Status of Prometheus'
2026-04-07 03:57:00.097497 | orchestrator | + echo
2026-04-07 03:57:00.097506 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-07 03:57:00.157532 | orchestrator | Unauthorized
2026-04-07 03:57:00.160358 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-07 03:57:00.220692 | orchestrator | Unauthorized
2026-04-07 03:57:00.225193 | orchestrator |
2026-04-07 03:57:00.225328 | orchestrator | # Status of RabbitMQ
2026-04-07 03:57:00.225348 | orchestrator |
2026-04-07 03:57:00.225369 | orchestrator | + echo
2026-04-07 03:57:00.225383 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-07 03:57:00.225397 | orchestrator | + echo
2026-04-07 03:57:00.226389 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-07 03:57:00.289002 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-07 03:57:00.289075 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-07 03:57:00.289083 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-04-07 03:57:00.821668 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-04-07 03:57:00.833497 | orchestrator |
2026-04-07 03:57:00.833577 | orchestrator | # Status of Redis
2026-04-07 03:57:00.833584 | orchestrator |
2026-04-07 03:57:00.833589 | orchestrator | + echo
2026-04-07 03:57:00.833594 | orchestrator | + echo '# Status of Redis'
2026-04-07 03:57:00.833598 | orchestrator | + echo
2026-04-07 03:57:00.833604 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-07 03:57:00.842375 | orchestrator | TCP OK - 0.004 second response time on 192.168.16.10 port 6379|time=0.003901s;;;0.000000;10.000000
2026-04-07 03:57:00.843138 | orchestrator |
2026-04-07 03:57:00.843180 | orchestrator | # Create backup of MariaDB database
2026-04-07 03:57:00.843191 | orchestrator |
2026-04-07 03:57:00.843199 | orchestrator | + popd
2026-04-07 03:57:00.843207 | orchestrator | + echo
2026-04-07 03:57:00.843214 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-07 03:57:00.843222 | orchestrator | + echo
2026-04-07 03:57:00.843274 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-07 03:57:03.038373 | orchestrator | 2026-04-07 03:57:03 | INFO  | Task 0fa3e6a6-d197-4cf3-b356-1cae9c43c890 (mariadb_backup) was prepared for execution.
2026-04-07 03:57:03.038457 | orchestrator | 2026-04-07 03:57:03 | INFO  | It takes a moment until task 0fa3e6a6-d197-4cf3-b356-1cae9c43c890 (mariadb_backup) has been started and output is visible here.
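The check script above gates its MariaDB and RabbitMQ credentials on a version comparison: `semver 9.5.0 10.0.0-0` returns `-1` ("older than"), so `[[ -1 -ge 0 ]]` is false and the sharded user `root_shard_0` is used. A minimal sketch of the same gating, assuming GNU coreutils `sort -V` as a stand-in for the testbed's own `semver` helper (`version_ge` is a hypothetical name, and the then-branch is an assumption — this run only shows the else-branch):

```shell
#!/usr/bin/env bash
# version_ge A B -> exit 0 when A >= B in version ordering.
# Hypothetical helper using sort -V; the testbed ships its own `semver` binary.
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

MANAGER_VERSION=9.5.0
if version_ge "$MANAGER_VERSION" "10.0.0"; then
    MARIADB_USER=root            # assumed newer-release branch (not shown in this log)
else
    MARIADB_USER=root_shard_0    # pre-10.x branch, as taken in this run
fi
echo "$MARIADB_USER"
```

With `MANAGER_VERSION=9.5.0` this prints `root_shard_0`, matching the `check_galera_cluster -u root_shard_0` invocation in the log. Note that plain `sort -V` does not rank semver pre-release suffixes such as `-0` correctly, which is presumably why the script uses a dedicated comparator.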
2026-04-07 03:58:43.134218 | orchestrator |
2026-04-07 03:58:43.134313 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 03:58:43.134327 | orchestrator |
2026-04-07 03:58:43.134349 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 03:58:43.134359 | orchestrator | Tuesday 07 April 2026 03:57:07 +0000 (0:00:00.198) 0:00:00.198 *********
2026-04-07 03:58:43.134365 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:58:43.134389 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:58:43.134396 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:58:43.134403 | orchestrator |
2026-04-07 03:58:43.134410 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 03:58:43.134417 | orchestrator | Tuesday 07 April 2026 03:57:08 +0000 (0:00:00.384) 0:00:00.583 *********
2026-04-07 03:58:43.134425 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-07 03:58:43.134433 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-07 03:58:43.134439 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-07 03:58:43.134446 | orchestrator |
2026-04-07 03:58:43.134453 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-07 03:58:43.134461 | orchestrator |
2026-04-07 03:58:43.134465 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-07 03:58:43.134470 | orchestrator | Tuesday 07 April 2026 03:57:08 +0000 (0:00:00.655) 0:00:01.239 *********
2026-04-07 03:58:43.134476 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 03:58:43.134483 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 03:58:43.134490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 03:58:43.134497 | orchestrator |
2026-04-07 03:58:43.134504 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 03:58:43.134514 | orchestrator | Tuesday 07 April 2026 03:57:09 +0000 (0:00:00.461) 0:00:01.700 *********
2026-04-07 03:58:43.134523 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 03:58:43.134531 | orchestrator |
2026-04-07 03:58:43.134538 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-07 03:58:43.134545 | orchestrator | Tuesday 07 April 2026 03:57:10 +0000 (0:00:00.701) 0:00:02.402 *********
2026-04-07 03:58:43.134553 | orchestrator | ok: [testbed-node-2]
2026-04-07 03:58:43.134560 | orchestrator | ok: [testbed-node-0]
2026-04-07 03:58:43.134566 | orchestrator | ok: [testbed-node-1]
2026-04-07 03:58:43.134574 | orchestrator |
2026-04-07 03:58:43.134580 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-07 03:58:43.134587 | orchestrator | Tuesday 07 April 2026 03:57:13 +0000 (0:00:03.605) 0:00:06.007 *********
2026-04-07 03:58:43.134594 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-07 03:58:43.134601 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-07 03:58:43.134608 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-07 03:58:43.134615 | orchestrator | mariadb_bootstrap_restart
2026-04-07 03:58:43.134623 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:58:43.134629 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:58:43.134636 | orchestrator | changed: [testbed-node-0]
2026-04-07 03:58:43.134643 | orchestrator |
2026-04-07 03:58:43.134650 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-07 03:58:43.134657 | orchestrator | skipping: no hosts matched
2026-04-07 03:58:43.134664 | orchestrator |
2026-04-07 03:58:43.134670 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-07 03:58:43.134677 | orchestrator | skipping: no hosts matched
2026-04-07 03:58:43.134684 | orchestrator |
2026-04-07 03:58:43.134690 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-07 03:58:43.134697 | orchestrator | skipping: no hosts matched
2026-04-07 03:58:43.134703 | orchestrator |
2026-04-07 03:58:43.134710 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-07 03:58:43.134716 | orchestrator |
2026-04-07 03:58:43.134723 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-07 03:58:43.134731 | orchestrator | Tuesday 07 April 2026 03:58:41 +0000 (0:01:28.257) 0:01:34.265 *********
2026-04-07 03:58:43.134738 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:58:43.134745 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:58:43.134758 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:58:43.134765 | orchestrator |
2026-04-07 03:58:43.134772 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-07 03:58:43.134780 | orchestrator | Tuesday 07 April 2026 03:58:42 +0000 (0:00:00.362) 0:01:34.628 *********
2026-04-07 03:58:43.134787 | orchestrator | skipping: [testbed-node-0]
2026-04-07 03:58:43.134794 | orchestrator | skipping: [testbed-node-1]
2026-04-07 03:58:43.134801 | orchestrator | skipping: [testbed-node-2]
2026-04-07 03:58:43.134808 | orchestrator |
2026-04-07 03:58:43.134854 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 03:58:43.134863 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-07 03:58:43.134872 | orchestrator | testbed-node-1 : ok=4  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-04-07 03:58:43.134879 | orchestrator | testbed-node-2 : ok=4  changed=0  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-04-07 03:58:43.134886 | orchestrator |
2026-04-07 03:58:43.134893 | orchestrator |
2026-04-07 03:58:43.134899 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 03:58:43.134906 | orchestrator | Tuesday 07 April 2026 03:58:42 +0000 (0:00:00.426) 0:01:35.055 *********
2026-04-07 03:58:43.134912 | orchestrator | ===============================================================================
2026-04-07 03:58:43.134919 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 88.26s
2026-04-07 03:58:43.134941 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.61s
2026-04-07 03:58:43.134949 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.70s
2026-04-07 03:58:43.134957 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2026-04-07 03:58:43.134964 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.46s
2026-04-07 03:58:43.134971 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s
2026-04-07 03:58:43.134978 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-04-07 03:58:43.134985 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.36s
2026-04-07 03:58:43.535678 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-07 03:58:43.545220 | orchestrator | + set -e
2026-04-07 03:58:43.545310 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 03:58:43.545596 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 03:58:43.545735 | orchestrator | ++ INTERACTIVE=false
2026-04-07 03:58:43.545753 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 03:58:43.545764 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 03:58:43.545776 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-07 03:58:43.548022 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-07 03:58:43.556015 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-07 03:58:43.556274 | orchestrator |
2026-04-07 03:58:43.556294 | orchestrator | # OpenStack endpoints
2026-04-07 03:58:43.556306 | orchestrator |
2026-04-07 03:58:43.556317 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-07 03:58:43.556328 | orchestrator | + export OS_CLOUD=admin
2026-04-07 03:58:43.556339 | orchestrator | + OS_CLOUD=admin
2026-04-07 03:58:43.556351 | orchestrator | + echo
2026-04-07 03:58:43.556362 | orchestrator | + echo '# OpenStack endpoints'
2026-04-07 03:58:43.556373 | orchestrator | + echo
2026-04-07 03:58:43.556389 | orchestrator | + openstack endpoint list
2026-04-07 03:58:47.038905 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-07 03:58:47.039024 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-07 03:58:47.039038 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-07 03:58:47.039067 | orchestrator | | 02cc628fe5634fe29f85d80667301c47 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-07 03:58:47.039076 | orchestrator | | 08cff53cae7d4e00a9c1787bbb879a88 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-07 03:58:47.039084 | orchestrator | | 0e0f67ddd2114446a8f50e65ad5d7355 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-07 03:58:47.039092 | orchestrator | | 118323344f45403fb9480d70926dd5ca | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-07 03:58:47.039100 | orchestrator | | 17e8f73c751a410b97410a0edc3ada29 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-07 03:58:47.039109 | orchestrator | | 2d98fcb8e88d4ce58aface8110037e67 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-07 03:58:47.039117 | orchestrator | | 30105efd9c234d818673380c5f45a321 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-07 03:58:47.039125 | orchestrator | | 3434c5f7175a4ce98f461d3771727ef0 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-07 03:58:47.039133 | orchestrator | | 38a825c2a5f64436be6befb412da8788 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-07 03:58:47.039180 | orchestrator | | 4815f9350dd54781ad111b82da469db3 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-07 03:58:47.039190 | orchestrator | | 494809daddfb4c3e8f79282a8060f9df | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-07 03:58:47.039198 | orchestrator | | 5472a61d5b254d098108d1f3090f6567 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-07 03:58:47.039207 | orchestrator | | 58343172fadf4e92a40e7be97fd06b50 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-07 03:58:47.039215 | orchestrator | | 5a99cbf1bd114461b15b41daae6c8a0a | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-07 03:58:47.039223 | orchestrator | | 626e184b0e2744849f871e7b8b7d9a9d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-07 03:58:47.039231 | orchestrator | | 69d4721cf00542e9a3c551551d6d0455 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-07 03:58:47.039244 | orchestrator | | 7975fa33a845489a8d8785f732a91f96 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-07 03:58:47.039257 | orchestrator | | 7b9e9e9c6d594f8f8a571808905232f3 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-07 03:58:47.039269 | orchestrator | | 81290d3b61d14885a475f28f4720b5b5 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-07 03:58:47.039282 | orchestrator | | 8b7d6859de414b52a9c9b9c57764969b | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-07 03:58:47.039325 | orchestrator | | 8cbdb5d513ad4950b8d82bc8dcad9545 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-07 03:58:47.039349 | orchestrator | | a1788c5a813841048450b22f0f65234f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-07 03:58:47.039358 | orchestrator | | b2c0a356bcf04323858655abf854e2d3 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-07 03:58:47.039367 | orchestrator | | b3d3541eeb714c66a8a79a19d2d9be5f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-07 03:58:47.039375 | orchestrator | | b9a1b5e58f7d42518338463533747071 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-07 03:58:47.039385 | orchestrator | | d75b17cf718d476e83f54c17ba7767bd | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-07 03:58:47.039395 | orchestrator | | ddab7a14c663463d8f9d59ec068b1f76 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-07 03:58:47.039405 | orchestrator | | e27a94b63f554d75830efe9852afb19e | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-07 03:58:47.039414 | orchestrator | | fac5283131d94e27be85816973f2fddc | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-07 03:58:47.039424 | orchestrator | | fe93e33f90b247bc9a5d3b6c5ec5a5b4 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-07 03:58:47.039433 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-07 03:58:47.372296 | orchestrator |
2026-04-07 03:58:47.372380 | orchestrator | # Cinder
2026-04-07 03:58:47.372391 | orchestrator |
2026-04-07 03:58:47.372400 | orchestrator | + echo
2026-04-07 03:58:47.372407 | orchestrator | + echo '# Cinder'
2026-04-07 03:58:47.372415 | orchestrator | + echo
2026-04-07 03:58:47.372423 | orchestrator | + openstack volume service list
2026-04-07 03:58:50.172100 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-07 03:58:50.172312 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-07 03:58:50.172342 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-07 03:58:50.172381 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-07T03:58:41.000000 |
2026-04-07 03:58:50.172401 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-07T03:58:41.000000 |
2026-04-07 03:58:50.172413 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-07T03:58:41.000000 |
2026-04-07 03:58:50.172425 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-07T03:58:40.000000 |
2026-04-07 03:58:50.172436 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-07T03:58:46.000000 |
2026-04-07 03:58:50.172447 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-07T03:58:47.000000 |
2026-04-07 03:58:50.172459 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-07T03:58:44.000000 |
2026-04-07 03:58:50.172470 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-07T03:58:45.000000 |
2026-04-07 03:58:50.172510 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-07T03:58:46.000000 |
2026-04-07 03:58:50.172522 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-07 03:58:50.547814 | orchestrator |
2026-04-07 03:58:50.547903 | orchestrator | # Neutron
2026-04-07 03:58:50.547914 | orchestrator |
2026-04-07 03:58:50.547921 | orchestrator | + echo
2026-04-07 03:58:50.547928 | orchestrator | + echo '# Neutron'
2026-04-07 03:58:50.547935 | orchestrator | + echo
2026-04-07 03:58:50.547941 | orchestrator | + openstack network agent list
2026-04-07 03:58:53.490009 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-07 03:58:53.490273 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-07 03:58:53.490293 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-07 03:58:53.490304 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490314 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490341 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490351 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490360 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490368 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-07 03:58:53.490377 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-07 03:58:53.490386 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-07 03:58:53.490395 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-07 03:58:53.490404 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-07 03:58:53.818203 | orchestrator | + openstack network service provider list
2026-04-07 03:58:56.598076 | orchestrator | +---------------+------+---------+
2026-04-07 03:58:56.598263 | orchestrator | | Service Type | Name | Default |
2026-04-07 03:58:56.598279 | orchestrator | +---------------+------+---------+
2026-04-07 03:58:56.598289 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-07 03:58:56.598299 | orchestrator | +---------------+------+---------+
2026-04-07 03:58:56.922494 | orchestrator |
2026-04-07 03:58:56.922588 | orchestrator | # Nova
2026-04-07 03:58:56.922604 | orchestrator |
2026-04-07 03:58:56.922617 | orchestrator | + echo
2026-04-07 03:58:56.922628 | orchestrator | + echo '# Nova'
2026-04-07 03:58:56.922639 | orchestrator | + echo
2026-04-07 03:58:56.922651 | orchestrator | + openstack compute service list
2026-04-07 03:58:59.796574 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-07 03:58:59.796679 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-07 03:58:59.796695 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-07 03:58:59.796736 | orchestrator | | c07c92ef-9cab-437c-8c28-8b0289466372 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-07T03:58:49.000000 |
2026-04-07 03:58:59.796749 | orchestrator | | 734f4e51-886e-47ca-8020-8c9197ab8d4a | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-07T03:58:53.000000 |
2026-04-07 03:58:59.796760 | orchestrator | | 458b5620-ce8a-4363-a030-66845909dd35 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-07T03:58:54.000000 |
2026-04-07 03:58:59.796772 | orchestrator | | e856dd98-32e8-40a2-9d4f-8f1e872f6213 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-07T03:58:57.000000 |
2026-04-07 03:58:59.796783 | orchestrator | | f83fc95e-dd4c-4555-b34a-48807d3a264d | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-07T03:58:59.000000 |
2026-04-07 03:58:59.796794 | orchestrator | | 05bd293a-1872-4698-a54d-ceba4d3a0756 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-07T03:58:49.000000 |
2026-04-07 03:58:59.796805 | orchestrator | | 1adf695b-098a-433d-85cd-19e990610bb6 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-07T03:58:51.000000 |
2026-04-07 03:58:59.796816 | orchestrator | | 6c586445-a116-4383-b227-aa057c320682 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-07T03:58:51.000000 |
2026-04-07 03:58:59.796827 | orchestrator | | c1ef06dc-dda6-4ff1-8753-e70d0c8770f7 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-07T03:58:51.000000 |
2026-04-07 03:58:59.796838 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-07 03:59:00.131608 | orchestrator | + openstack hypervisor list
2026-04-07 03:59:03.034656 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-07 03:59:03.034768 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-07 03:59:03.034776 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-07 03:59:03.034783 | orchestrator | | 314b5bd4-9738-476c-b28e-b1ebbab6301c | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-07 03:59:03.034790 | orchestrator | | 23ecce4d-2364-4644-adc7-7c0ce2533eb9 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-07 03:59:03.034796 | orchestrator | | 661591df-49ea-4b69-900d-37be6b9c5fe1 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-07 03:59:03.034803 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-07 03:59:03.355907 | orchestrator |
2026-04-07 03:59:03.356074 | orchestrator | # Run
OpenStack test play 2026-04-07 03:59:03.356118 | orchestrator | 2026-04-07 03:59:03.356163 | orchestrator | + echo 2026-04-07 03:59:03.356182 | orchestrator | + echo '# Run OpenStack test play' 2026-04-07 03:59:03.356205 | orchestrator | + echo 2026-04-07 03:59:03.356221 | orchestrator | + osism apply --environment openstack test 2026-04-07 03:59:05.490439 | orchestrator | 2026-04-07 03:59:05 | INFO  | Trying to run play test in environment openstack 2026-04-07 03:59:15.646324 | orchestrator | 2026-04-07 03:59:15 | INFO  | Task da15db24-ddcc-489f-b341-5874ada9bf73 (test) was prepared for execution. 2026-04-07 03:59:15.646457 | orchestrator | 2026-04-07 03:59:15 | INFO  | It takes a moment until task da15db24-ddcc-489f-b341-5874ada9bf73 (test) has been started and output is visible here. 2026-04-07 04:02:47.740844 | orchestrator | 2026-04-07 04:02:47.740925 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-07 04:02:47.740932 | orchestrator | 2026-04-07 04:02:47.740937 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-07 04:02:47.740942 | orchestrator | Tuesday 07 April 2026 03:59:20 +0000 (0:00:00.082) 0:00:00.082 ********* 2026-04-07 04:02:47.740946 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.740990 | orchestrator | 2026-04-07 04:02:47.740996 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-07 04:02:47.741000 | orchestrator | Tuesday 07 April 2026 03:59:24 +0000 (0:00:04.166) 0:00:04.248 ********* 2026-04-07 04:02:47.741018 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741022 | orchestrator | 2026-04-07 04:02:47.741026 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-07 04:02:47.741030 | orchestrator | Tuesday 07 April 2026 03:59:29 +0000 (0:00:04.580) 0:00:08.829 ********* 2026-04-07 04:02:47.741034 | orchestrator | 
changed: [localhost] 2026-04-07 04:02:47.741038 | orchestrator | 2026-04-07 04:02:47.741042 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-07 04:02:47.741046 | orchestrator | Tuesday 07 April 2026 03:59:36 +0000 (0:00:07.511) 0:00:16.340 ********* 2026-04-07 04:02:47.741050 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741054 | orchestrator | 2026-04-07 04:02:47.741058 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-07 04:02:47.741062 | orchestrator | Tuesday 07 April 2026 03:59:40 +0000 (0:00:04.460) 0:00:20.801 ********* 2026-04-07 04:02:47.741065 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741069 | orchestrator | 2026-04-07 04:02:47.741073 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-07 04:02:47.741077 | orchestrator | Tuesday 07 April 2026 03:59:45 +0000 (0:00:04.649) 0:00:25.450 ********* 2026-04-07 04:02:47.741081 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-07 04:02:47.741085 | orchestrator | changed: [localhost] => (item=member) 2026-04-07 04:02:47.741091 | orchestrator | changed: [localhost] => (item=creator) 2026-04-07 04:02:47.741095 | orchestrator | 2026-04-07 04:02:47.741099 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-07 04:02:47.741103 | orchestrator | Tuesday 07 April 2026 03:59:58 +0000 (0:00:12.661) 0:00:38.111 ********* 2026-04-07 04:02:47.741106 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741110 | orchestrator | 2026-04-07 04:02:47.741126 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-07 04:02:47.741130 | orchestrator | Tuesday 07 April 2026 04:00:03 +0000 (0:00:04.962) 0:00:43.074 ********* 2026-04-07 04:02:47.741134 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741138 
| orchestrator | 2026-04-07 04:02:47.741142 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-07 04:02:47.741146 | orchestrator | Tuesday 07 April 2026 04:00:08 +0000 (0:00:05.428) 0:00:48.502 ********* 2026-04-07 04:02:47.741150 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741153 | orchestrator | 2026-04-07 04:02:47.741157 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-07 04:02:47.741161 | orchestrator | Tuesday 07 April 2026 04:00:13 +0000 (0:00:04.863) 0:00:53.365 ********* 2026-04-07 04:02:47.741165 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741169 | orchestrator | 2026-04-07 04:02:47.741173 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-04-07 04:02:47.741177 | orchestrator | Tuesday 07 April 2026 04:00:17 +0000 (0:00:04.443) 0:00:57.809 ********* 2026-04-07 04:02:47.741181 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741184 | orchestrator | 2026-04-07 04:02:47.741188 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-07 04:02:47.741192 | orchestrator | Tuesday 07 April 2026 04:00:22 +0000 (0:00:04.763) 0:01:02.572 ********* 2026-04-07 04:02:47.741196 | orchestrator | changed: [localhost] 2026-04-07 04:02:47.741200 | orchestrator | 2026-04-07 04:02:47.741204 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-07 04:02:47.741208 | orchestrator | Tuesday 07 April 2026 04:00:27 +0000 (0:00:04.932) 0:01:07.505 ********* 2026-04-07 04:02:47.741212 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-07 04:02:47.741216 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-07 04:02:47.741220 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-07 04:02:47.741224 | orchestrator | 2026-04-07 
04:02:47.741228 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-07 04:02:47.741236 | orchestrator | Tuesday 07 April 2026 04:00:42 +0000 (0:00:14.863) 0:01:22.368 ********* 2026-04-07 04:02:47.741241 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-07 04:02:47.741246 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-07 04:02:47.741249 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-07 04:02:47.741253 | orchestrator | 2026-04-07 04:02:47.741257 | orchestrator | TASK [Create test routers] ***************************************************** 2026-04-07 04:02:47.741261 | orchestrator | Tuesday 07 April 2026 04:01:00 +0000 (0:00:17.834) 0:01:40.203 ********* 2026-04-07 04:02:47.741265 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-07 04:02:47.741271 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-07 04:02:47.741275 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-07 04:02:47.741279 | orchestrator | 2026-04-07 04:02:47.741283 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-07 04:02:47.741287 | orchestrator | 2026-04-07 04:02:47.741291 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-07 04:02:47.741305 | orchestrator | Tuesday 07 April 2026 04:01:34 +0000 (0:00:34.152) 0:02:14.356 ********* 2026-04-07 04:02:47.741309 | orchestrator | ok: [localhost] 2026-04-07 04:02:47.741313 | orchestrator | 2026-04-07 04:02:47.741317 | orchestrator | TASK [Detach test volume] 
****************************************************** 2026-04-07 04:02:47.741321 | orchestrator | Tuesday 07 April 2026 04:01:38 +0000 (0:00:03.955) 0:02:18.311 ********* 2026-04-07 04:02:47.741325 | orchestrator | skipping: [localhost] 2026-04-07 04:02:47.741329 | orchestrator | 2026-04-07 04:02:47.741333 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-07 04:02:47.741337 | orchestrator | Tuesday 07 April 2026 04:01:38 +0000 (0:00:00.067) 0:02:18.378 ********* 2026-04-07 04:02:47.741341 | orchestrator | skipping: [localhost] 2026-04-07 04:02:47.741344 | orchestrator | 2026-04-07 04:02:47.741348 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-07 04:02:47.741352 | orchestrator | Tuesday 07 April 2026 04:01:38 +0000 (0:00:00.048) 0:02:18.427 ********* 2026-04-07 04:02:47.741356 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-07 04:02:47.741360 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-07 04:02:47.741364 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-07 04:02:47.741368 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-07 04:02:47.741371 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-07 04:02:47.741375 | orchestrator | skipping: [localhost] 2026-04-07 04:02:47.741379 | orchestrator | 2026-04-07 04:02:47.741383 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-07 04:02:47.741387 | orchestrator | Tuesday 07 April 2026 04:01:38 +0000 (0:00:00.184) 0:02:18.612 ********* 2026-04-07 04:02:47.741391 | orchestrator | skipping: [localhost] 2026-04-07 04:02:47.741395 | orchestrator | 2026-04-07 04:02:47.741398 | orchestrator | TASK [Create test instances] 
*************************************************** 2026-04-07 04:02:47.741402 | orchestrator | Tuesday 07 April 2026 04:01:38 +0000 (0:00:00.165) 0:02:18.777 ********* 2026-04-07 04:02:47.741406 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-07 04:02:47.741411 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-07 04:02:47.741416 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-07 04:02:47.741421 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-07 04:02:47.741429 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-07 04:02:47.741433 | orchestrator | 2026-04-07 04:02:47.741438 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-07 04:02:47.741442 | orchestrator | Tuesday 07 April 2026 04:01:43 +0000 (0:00:05.038) 0:02:23.816 ********* 2026-04-07 04:02:47.741447 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-07 04:02:47.741453 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-07 04:02:47.741457 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-07 04:02:47.741462 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-04-07 04:02:47.741471 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j813474622237.3816', 'results_file': '/ansible/.ansible_async/j813474622237.3816', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:02:47.741480 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j137610688281.3841', 'results_file': '/ansible/.ansible_async/j137610688281.3841', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:02:47.741486 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-07 04:02:47.741492 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j131223256736.3866', 'results_file': '/ansible/.ansible_async/j131223256736.3866', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:02:47.741498 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j284080647059.3891', 'results_file': '/ansible/.ansible_async/j284080647059.3891', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:02:47.741507 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j552993310742.3916', 'results_file': '/ansible/.ansible_async/j552993310742.3916', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-07 04:02:47.741514 | orchestrator |
2026-04-07 04:02:47.741520 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-07 04:02:47.741526 | orchestrator | Tuesday 07 April 2026 04:02:42 +0000 (0:00:58.642) 0:03:22.459 *********
2026-04-07 04:02:47.741536 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-07 04:04:01.465770 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-07 04:04:01.465879 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-07 04:04:01.465889 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-07 04:04:01.465973 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-07 04:04:01.465981 | orchestrator |
2026-04-07 04:04:01.465990 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-07 04:04:01.465997 | orchestrator | Tuesday 07 April 2026 04:02:47 +0000 (0:00:05.085) 0:03:27.544 *********
2026-04-07 04:04:01.466005 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-07 04:04:01.466069 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j801192737865.4027', 'results_file': '/ansible/.ansible_async/j801192737865.4027', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466080 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j799194121969.4052', 'results_file': '/ansible/.ansible_async/j799194121969.4052', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466109 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j859817846609.4077', 'results_file': '/ansible/.ansible_async/j859817846609.4077', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466117 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j690926003308.4102', 'results_file': '/ansible/.ansible_async/j690926003308.4102', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466125 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j840167658970.4127', 'results_file': '/ansible/.ansible_async/j840167658970.4127', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466131 | orchestrator |
2026-04-07 04:04:01.466139 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-07 04:04:01.466146 | orchestrator | Tuesday 07 April 2026 04:02:57 +0000 (0:00:09.933) 0:03:37.477 *********
2026-04-07 04:04:01.466153 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-07 04:04:01.466161 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-07 04:04:01.466168 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-07 04:04:01.466176 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-07 04:04:01.466183 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-07 04:04:01.466191 | orchestrator |
2026-04-07 04:04:01.466199 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-07 04:04:01.466207 | orchestrator | Tuesday 07 April 2026 04:03:02 +0000 (0:00:05.078) 0:03:42.556 *********
2026-04-07 04:04:01.466214 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-07 04:04:01.466222 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j326707811446.4203', 'results_file': '/ansible/.ansible_async/j326707811446.4203', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466229 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j708101544826.4228', 'results_file': '/ansible/.ansible_async/j708101544826.4228', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466237 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j793667572616.4254', 'results_file': '/ansible/.ansible_async/j793667572616.4254', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466257 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j617402986862.4280', 'results_file': '/ansible/.ansible_async/j617402986862.4280', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466283 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j912244223338.4306', 'results_file': '/ansible/.ansible_async/j912244223338.4306', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-07 04:04:01.466290 | orchestrator |
2026-04-07 04:04:01.466298 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-07 04:04:01.466306 | orchestrator | Tuesday 07 April 2026 04:03:12 +0000 (0:00:10.203) 0:03:52.760 *********
2026-04-07 04:04:01.466322 | orchestrator | changed: [localhost]
2026-04-07 04:04:01.466330 | orchestrator |
2026-04-07 04:04:01.466337 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-07 04:04:01.466345 | orchestrator | Tuesday 07 April 2026 04:03:19 +0000 (0:00:06.684) 0:03:59.445 *********
2026-04-07 04:04:01.466353 | orchestrator | changed: [localhost]
2026-04-07 04:04:01.466361 | orchestrator |
2026-04-07 04:04:01.466369 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-07 04:04:01.466377 | orchestrator | Tuesday 07 April 2026 04:03:33 +0000 (0:00:14.351) 0:04:13.796 *********
2026-04-07 04:04:01.466385 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-07 04:04:01.466394 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-07 04:04:01.466400 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-07 04:04:01.466408 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-07 04:04:01.466415 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-07 04:04:01.466422 | orchestrator |
2026-04-07 04:04:01.466429 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-07 04:04:01.466437 | orchestrator | Tuesday 07 April 2026 04:04:01 +0000 (0:00:27.056) 0:04:40.853 *********
2026-04-07 04:04:01.466444 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-07 04:04:01.466451 | orchestrator |  "msg": "test: 192.168.112.173"
2026-04-07 04:04:01.466458 | orchestrator | }
2026-04-07 04:04:01.466466 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-07 04:04:01.466474 | orchestrator |  "msg": "test-1: 192.168.112.126"
2026-04-07 04:04:01.466481 | orchestrator | }
2026-04-07 04:04:01.466489 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-07 04:04:01.466496 | orchestrator |  "msg": "test-2: 192.168.112.149"
2026-04-07 04:04:01.466503 | orchestrator | }
2026-04-07 04:04:01.466510 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-07 04:04:01.466517 | orchestrator |  "msg": "test-3: 192.168.112.107"
2026-04-07 04:04:01.466524 | orchestrator | }
2026-04-07 04:04:01.466531 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-07 04:04:01.466538 | orchestrator |  "msg": "test-4: 192.168.112.116"
2026-04-07 04:04:01.466545 | orchestrator | }
2026-04-07 04:04:01.466553 | orchestrator |
2026-04-07 04:04:01.466560 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 04:04:01.466567 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-07 04:04:01.466576 | orchestrator |
2026-04-07 04:04:01.466583 | orchestrator |
2026-04-07 04:04:01.466590 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:04:01.466597 | orchestrator | Tuesday 07 April 2026 04:04:01 +0000 (0:00:00.124) 0:04:40.978 *********
2026-04-07 04:04:01.466604 | orchestrator | ===============================================================================
2026-04-07 04:04:01.466611 | orchestrator | Wait for instance creation to complete --------------------------------- 58.64s
2026-04-07 04:04:01.466619 | orchestrator | Create test routers ---------------------------------------------------- 34.15s
2026-04-07 04:04:01.466625 | orchestrator | Create floating ip addresses ------------------------------------------- 27.06s
2026-04-07 04:04:01.466633 | orchestrator | Create test subnets ---------------------------------------------------- 17.83s
2026-04-07 04:04:01.466640 | orchestrator | Create test networks --------------------------------------------------- 14.86s
2026-04-07 04:04:01.466647 | orchestrator | Attach test volume ----------------------------------------------------- 14.35s
2026-04-07 04:04:01.466655 | orchestrator | Add member roles to user test ------------------------------------------ 12.66s
2026-04-07 04:04:01.466662 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.20s
2026-04-07 04:04:01.466669 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.93s
2026-04-07 04:04:01.466683 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.51s
2026-04-07 04:04:01.466690 | orchestrator | Create test volume ------------------------------------------------------ 6.68s
2026-04-07 04:04:01.466698 | orchestrator | Create ssh security group ----------------------------------------------- 5.43s
2026-04-07 04:04:01.466705 | orchestrator | Add metadata to instances ----------------------------------------------- 5.09s
2026-04-07 04:04:01.466712 | orchestrator | Add tag to instances ---------------------------------------------------- 5.08s
2026-04-07 04:04:01.466719 | orchestrator | Create test instances --------------------------------------------------- 5.04s
2026-04-07 04:04:01.466727 | orchestrator | Create test server group ------------------------------------------------ 4.96s
2026-04-07 04:04:01.466734 | orchestrator | Create test keypair ----------------------------------------------------- 4.93s
2026-04-07 04:04:01.466741 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.86s
2026-04-07 04:04:01.466753 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.76s
2026-04-07 04:04:01.466761 | orchestrator | Create test user -------------------------------------------------------- 4.65s
2026-04-07 04:04:01.874420 | orchestrator | + server_list
2026-04-07 04:04:01.874547 | orchestrator | + openstack --os-cloud test server list
2026-04-07 04:04:05.862291 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-07 04:04:05.862402 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-07 04:04:05.862413 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-07 04:04:05.862420 | orchestrator | | 6c9c7675-e43f-4860-a7fb-d6c00b77a9bf | test-3 | ACTIVE | test-2=192.168.112.107, 192.168.201.35 | N/A (booted from volume) | SCS-1L-1 |
2026-04-07 04:04:05.862428 | orchestrator | | ad3ba69f-c0e7-47ca-a356-74fcb6613926 | test-4 | ACTIVE | test-3=192.168.112.116, 192.168.202.224 | N/A (booted from volume) | SCS-1L-1 |
2026-04-07 04:04:05.862435 | orchestrator | | 2e7dd70d-9d4d-4deb-a72e-d0168dfbe7d5 | test-2 | ACTIVE | test-2=192.168.112.149, 192.168.201.4 | N/A (booted from volume) | SCS-1L-1 |
2026-04-07 04:04:05.862442 | orchestrator | | c7bb7271-c225-41ba-b7f0-541e72c1f9a5 | test-1 | ACTIVE | test-1=192.168.112.126, 192.168.200.131 | N/A (booted from volume) | SCS-1L-1 |
2026-04-07 04:04:05.862450 | orchestrator | | fbe9b3ba-2ada-4ddd-becb-6507e3bca07b | test | ACTIVE | test-1=192.168.112.173, 192.168.200.86 | N/A (booted from volume) | SCS-1L-1 |
2026-04-07 04:04:05.862469 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-07 04:04:06.207336 | orchestrator | + openstack --os-cloud test server show test
2026-04-07 04:04:09.877184 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-07 04:04:09.877310 | orchestrator | | Field | Value |
2026-04-07 04:04:09.877328 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-07 04:04:09.877359 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-07 04:04:09.877372 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-07 04:04:09.877384 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-07 04:04:09.877396 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-07 04:04:09.877408 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-07 04:04:09.877420 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-07 04:04:09.877448 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-07 04:04:09.877461 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-07 04:04:09.877472 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-07 04:04:09.877500 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-07 04:04:09.877512 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-07 04:04:09.877532 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-07 04:04:09.877576 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-07 04:04:09.877592 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-07 04:04:09.877605 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-07 04:04:09.877617 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-07T04:02:16.000000 |
2026-04-07 04:04:09.877636 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-07 04:04:09.877648 | orchestrator | | accessIPv4 | |
2026-04-07 04:04:09.877660 | orchestrator | | accessIPv6 | |
2026-04-07 04:04:09.877678 | orchestrator | | addresses | test-1=192.168.112.173, 192.168.200.86 |
2026-04-07 04:04:09.877690 | orchestrator | | config_drive | |
2026-04-07 04:04:09.877705 | orchestrator | | created | 2026-04-07T04:01:49Z |
2026-04-07 04:04:09.877721 | orchestrator | | description | None |
2026-04-07 04:04:09.877740 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-07 04:04:09.877756 | orchestrator | | hostId | 1e6743a8d5f1ddbd647c5fb191f32dfb168e2fb03eb28b9a77e312bc |
2026-04-07 04:04:09.877770 | orchestrator | | host_status | None |
2026-04-07 04:04:09.877791 | orchestrator | | id | fbe9b3ba-2ada-4ddd-becb-6507e3bca07b |
2026-04-07 04:04:09.877806 | orchestrator | | image | N/A (booted from volume) |
2026-04-07 04:04:09.877826 | orchestrator | | key_name | test |
2026-04-07 04:04:09.877840 | orchestrator | | locked | False |
2026-04-07 04:04:09.877854 | orchestrator | | locked_reason | None |
2026-04-07 04:04:09.877868 | orchestrator | | name | test |
2026-04-07 04:04:09.877997 | orchestrator | | pinned_availability_zone | None |
2026-04-07 04:04:09.878082 | orchestrator | | progress | 0 |
2026-04-07 04:04:09.878099 | orchestrator | | project_id | 9c3238792d434c57a542995a15ca34a3 |
2026-04-07 04:04:09.878113 | orchestrator | | properties | hostname='test' |
2026-04-07 04:04:09.878134 | orchestrator | | security_groups | name='icmp' |
2026-04-07 04:04:09.878156 | orchestrator | | | name='ssh' |
2026-04-07 04:04:09.878168 | orchestrator | | server_groups | None |
2026-04-07 04:04:09.878179 | orchestrator | | status | ACTIVE |
2026-04-07 04:04:09.878191 | orchestrator | | tags | test |
2026-04-07 04:04:09.878202 | orchestrator | | trusted_image_certificates | None |
2026-04-07 04:04:09.878218 | orchestrator | | updated | 2026-04-07T04:02:49Z |
2026-04-07 04:04:09.878250 | orchestrator | | user_id | 12daf045d4fe4e54bc11527925bc3656 |
2026-04-07 04:04:09.878262 | orchestrator | | volumes_attached | delete_on_termination='True', id='1d087a65-ee8e-4a61-8f8c-cacd08329360' |
2026-04-07 04:04:09.878273 | orchestrator | | | delete_on_termination='False', id='0c370d6d-b8a2-44a0-bd6c-628a84012944' |
2026-04-07 04:04:09.881058 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-07 04:04:10.219371 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-07 04:04:13.427754 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-07 04:04:13.427869 | orchestrator | | Field | Value |
2026-04-07 04:04:13.427918 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-07 04:04:13.427935 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-07 04:04:13.427950 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-07 04:04:13.427976 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-07 04:04:13.427993 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-07 04:04:13.428008 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-07 04:04:13.428046 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-07 04:04:13.428084 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-07 04:04:13.428101 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-07 04:04:13.428116 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-07 04:04:13.428132 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-07 04:04:13.428146 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-07 04:04:13.428162 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-07 04:04:13.428176 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-07 04:04:13.428191 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-07 04:04:13.428205 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-07 04:04:13.428231 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-07T04:02:16.000000 | 2026-04-07 04:04:13.428255 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-07 04:04:13.428272 | orchestrator | | accessIPv4 | | 2026-04-07 04:04:13.428290 | orchestrator | | accessIPv6 | | 2026-04-07 04:04:13.428307 | orchestrator | | addresses | test-1=192.168.112.126, 192.168.200.131 | 2026-04-07 04:04:13.428323 | orchestrator | | config_drive | | 2026-04-07 04:04:13.428350 | orchestrator | | created | 2026-04-07T04:01:49Z | 2026-04-07 04:04:13.428366 | orchestrator | | description | None | 2026-04-07 04:04:13.428377 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-07 04:04:13.428394 | orchestrator | | hostId | 1e6743a8d5f1ddbd647c5fb191f32dfb168e2fb03eb28b9a77e312bc | 2026-04-07 04:04:13.428406 | orchestrator | | host_status | None | 2026-04-07 04:04:13.428425 | orchestrator | | id | c7bb7271-c225-41ba-b7f0-541e72c1f9a5 | 2026-04-07 04:04:13.428437 | orchestrator | | image | N/A (booted from volume) | 2026-04-07 04:04:13.428448 | orchestrator | | key_name | test | 2026-04-07 04:04:13.428459 | orchestrator | | locked | False | 2026-04-07 04:04:13.428470 | orchestrator | | locked_reason | None | 2026-04-07 04:04:13.428481 | orchestrator | | name | test-1 | 2026-04-07 04:04:13.428497 | orchestrator | | pinned_availability_zone | None | 2026-04-07 04:04:13.428514 | orchestrator | | progress | 0 | 2026-04-07 04:04:13.428525 | orchestrator | | project_id | 9c3238792d434c57a542995a15ca34a3 | 2026-04-07 04:04:13.428536 | orchestrator | | properties | hostname='test-1' | 2026-04-07 04:04:13.428560 | orchestrator | | security_groups | name='icmp' | 2026-04-07 04:04:13.428575 | orchestrator | | | name='ssh' | 2026-04-07 04:04:13.428591 | orchestrator | | server_groups | None | 2026-04-07 04:04:13.428605 | orchestrator | | status | ACTIVE | 2026-04-07 04:04:13.428619 | orchestrator | | tags | test | 2026-04-07 04:04:13.428634 | orchestrator | | trusted_image_certificates | None | 2026-04-07 04:04:13.428665 | orchestrator | | updated | 2026-04-07T04:02:49Z | 2026-04-07 04:04:13.428680 | orchestrator | | user_id | 12daf045d4fe4e54bc11527925bc3656 | 2026-04-07 04:04:13.428694 | orchestrator | | volumes_attached | delete_on_termination='True', id='0b0ff012-f580-49ad-8e4c-d6a90dfd0809' | 2026-04-07 04:04:13.431734 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:13.755081 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-07 04:04:17.128740 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:17.128869 | orchestrator | | Field | Value | 2026-04-07 04:04:17.129014 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:17.129034 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-07 04:04:17.129050 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-07 04:04:17.129099 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-07 04:04:17.129136 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-07 04:04:17.129155 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-07 04:04:17.129173 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-07 
04:04:17.129212 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-07 04:04:17.129231 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-07 04:04:17.129249 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-07 04:04:17.129266 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-07 04:04:17.129283 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-07 04:04:17.129300 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-07 04:04:17.129340 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-07 04:04:17.129358 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-07 04:04:17.129377 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-07 04:04:17.129394 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-07T04:02:17.000000 | 2026-04-07 04:04:17.129423 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-07 04:04:17.129442 | orchestrator | | accessIPv4 | | 2026-04-07 04:04:17.129461 | orchestrator | | accessIPv6 | | 2026-04-07 04:04:17.129479 | orchestrator | | addresses | test-2=192.168.112.149, 192.168.201.4 | 2026-04-07 04:04:17.129497 | orchestrator | | config_drive | | 2026-04-07 04:04:17.129528 | orchestrator | | created | 2026-04-07T04:01:50Z | 2026-04-07 04:04:17.129551 | orchestrator | | description | None | 2026-04-07 04:04:17.129566 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-07 04:04:17.129581 | orchestrator | | hostId | 28147c582880c43f93bd91d2edee1adc8a222d170f53c4cc239e44d8 | 2026-04-07 04:04:17.129597 | orchestrator | | host_status | None | 2026-04-07 04:04:17.129625 | orchestrator | | id | 
2e7dd70d-9d4d-4deb-a72e-d0168dfbe7d5 | 2026-04-07 04:04:17.129641 | orchestrator | | image | N/A (booted from volume) | 2026-04-07 04:04:17.129657 | orchestrator | | key_name | test | 2026-04-07 04:04:17.129673 | orchestrator | | locked | False | 2026-04-07 04:04:17.129698 | orchestrator | | locked_reason | None | 2026-04-07 04:04:17.129714 | orchestrator | | name | test-2 | 2026-04-07 04:04:17.129730 | orchestrator | | pinned_availability_zone | None | 2026-04-07 04:04:17.129747 | orchestrator | | progress | 0 | 2026-04-07 04:04:17.129765 | orchestrator | | project_id | 9c3238792d434c57a542995a15ca34a3 | 2026-04-07 04:04:17.129782 | orchestrator | | properties | hostname='test-2' | 2026-04-07 04:04:17.129810 | orchestrator | | security_groups | name='icmp' | 2026-04-07 04:04:17.129829 | orchestrator | | | name='ssh' | 2026-04-07 04:04:17.129845 | orchestrator | | server_groups | None | 2026-04-07 04:04:17.130423 | orchestrator | | status | ACTIVE | 2026-04-07 04:04:17.130461 | orchestrator | | tags | test | 2026-04-07 04:04:17.130472 | orchestrator | | trusted_image_certificates | None | 2026-04-07 04:04:17.130482 | orchestrator | | updated | 2026-04-07T04:02:50Z | 2026-04-07 04:04:17.130493 | orchestrator | | user_id | 12daf045d4fe4e54bc11527925bc3656 | 2026-04-07 04:04:17.130503 | orchestrator | | volumes_attached | delete_on_termination='True', id='63880d47-185d-41a5-acb5-5d9c8cd3b23f' | 2026-04-07 04:04:17.133992 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:17.457492 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-07 04:04:20.614502 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:20.614610 | orchestrator | | Field | Value | 2026-04-07 04:04:20.614649 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:20.614676 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-07 04:04:20.614687 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-07 04:04:20.614697 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-07 04:04:20.614707 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-07 04:04:20.614718 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-07 04:04:20.614728 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-07 04:04:20.614756 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-07 04:04:20.614767 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-07 04:04:20.614777 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-07 04:04:20.614794 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-07 04:04:20.614809 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-07 04:04:20.614820 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-07 04:04:20.614830 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-07 04:04:20.614840 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-07 04:04:20.614850 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-07 04:04:20.614861 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-07T04:02:17.000000 | 2026-04-07 04:04:20.614917 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-07 04:04:20.614937 | orchestrator | | accessIPv4 | | 2026-04-07 04:04:20.614965 | orchestrator | | accessIPv6 | | 2026-04-07 04:04:20.614982 | orchestrator | | addresses | test-2=192.168.112.107, 192.168.201.35 | 2026-04-07 04:04:20.615006 | orchestrator | | config_drive | | 2026-04-07 04:04:20.615025 | orchestrator | | created | 2026-04-07T04:01:54Z | 2026-04-07 04:04:20.615042 | orchestrator | | description | None | 2026-04-07 04:04:20.615059 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-07 04:04:20.615073 | orchestrator | | hostId | 28147c582880c43f93bd91d2edee1adc8a222d170f53c4cc239e44d8 | 2026-04-07 04:04:20.615083 | orchestrator | | host_status | None | 2026-04-07 04:04:20.615102 | orchestrator | | id | 6c9c7675-e43f-4860-a7fb-d6c00b77a9bf | 2026-04-07 04:04:20.615119 | orchestrator | | image | N/A (booted from volume) | 2026-04-07 04:04:20.615130 | orchestrator | | key_name | test | 2026-04-07 04:04:20.615140 | orchestrator | | locked | False | 2026-04-07 04:04:20.615155 | orchestrator | | locked_reason | None | 2026-04-07 04:04:20.615166 | orchestrator | | name | test-3 | 2026-04-07 04:04:20.615176 | orchestrator | | pinned_availability_zone | None | 2026-04-07 04:04:20.615187 | orchestrator | | progress | 0 | 2026-04-07 
04:04:20.615197 | orchestrator | | project_id | 9c3238792d434c57a542995a15ca34a3 | 2026-04-07 04:04:20.615210 | orchestrator | | properties | hostname='test-3' | 2026-04-07 04:04:20.615244 | orchestrator | | security_groups | name='icmp' | 2026-04-07 04:04:20.615262 | orchestrator | | | name='ssh' | 2026-04-07 04:04:20.615280 | orchestrator | | server_groups | None | 2026-04-07 04:04:20.615297 | orchestrator | | status | ACTIVE | 2026-04-07 04:04:20.615321 | orchestrator | | tags | test | 2026-04-07 04:04:20.615332 | orchestrator | | trusted_image_certificates | None | 2026-04-07 04:04:20.615342 | orchestrator | | updated | 2026-04-07T04:02:51Z | 2026-04-07 04:04:20.615352 | orchestrator | | user_id | 12daf045d4fe4e54bc11527925bc3656 | 2026-04-07 04:04:20.615362 | orchestrator | | volumes_attached | delete_on_termination='True', id='4818f276-288b-478f-8fe6-7b0c54ade156' | 2026-04-07 04:04:20.619074 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:20.943159 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-07 04:04:24.265858 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:24.266215 | orchestrator | | Field | Value | 2026-04-07 04:04:24.266250 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:24.266289 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-07 04:04:24.266310 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-07 04:04:24.266330 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-07 04:04:24.266351 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-07 04:04:24.266371 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-07 04:04:24.266393 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-07 04:04:24.266472 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-07 04:04:24.266501 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-07 04:04:24.266523 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-07 04:04:24.266543 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-07 04:04:24.266563 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-07 04:04:24.266584 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-07 04:04:24.266605 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-07 04:04:24.266625 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-07 04:04:24.266645 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-07 04:04:24.266679 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-07T04:02:17.000000 | 2026-04-07 04:04:24.266709 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-07 04:04:24.266832 | orchestrator | | accessIPv4 | | 2026-04-07 04:04:24.266864 | orchestrator | | accessIPv6 | | 2026-04-07 04:04:24.266914 | orchestrator | | 
addresses | test-3=192.168.112.116, 192.168.202.224 | 2026-04-07 04:04:24.266943 | orchestrator | | config_drive | | 2026-04-07 04:04:24.266965 | orchestrator | | created | 2026-04-07T04:01:52Z | 2026-04-07 04:04:24.266986 | orchestrator | | description | None | 2026-04-07 04:04:24.267006 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-07 04:04:24.267036 | orchestrator | | hostId | 28147c582880c43f93bd91d2edee1adc8a222d170f53c4cc239e44d8 | 2026-04-07 04:04:24.267057 | orchestrator | | host_status | None | 2026-04-07 04:04:24.267092 | orchestrator | | id | ad3ba69f-c0e7-47ca-a356-74fcb6613926 | 2026-04-07 04:04:24.267112 | orchestrator | | image | N/A (booted from volume) | 2026-04-07 04:04:24.267130 | orchestrator | | key_name | test | 2026-04-07 04:04:24.267150 | orchestrator | | locked | False | 2026-04-07 04:04:24.267177 | orchestrator | | locked_reason | None | 2026-04-07 04:04:24.267198 | orchestrator | | name | test-4 | 2026-04-07 04:04:24.267218 | orchestrator | | pinned_availability_zone | None | 2026-04-07 04:04:24.267248 | orchestrator | | progress | 0 | 2026-04-07 04:04:24.267268 | orchestrator | | project_id | 9c3238792d434c57a542995a15ca34a3 | 2026-04-07 04:04:24.267289 | orchestrator | | properties | hostname='test-4' | 2026-04-07 04:04:24.267321 | orchestrator | | security_groups | name='icmp' | 2026-04-07 04:04:24.267342 | orchestrator | | | name='ssh' | 2026-04-07 04:04:24.267362 | orchestrator | | server_groups | None | 2026-04-07 04:04:24.267383 | orchestrator | | status | ACTIVE | 2026-04-07 04:04:24.267413 | orchestrator | | tags | test | 2026-04-07 04:04:24.267434 | orchestrator | | 
trusted_image_certificates | None | 2026-04-07 04:04:24.267454 | orchestrator | | updated | 2026-04-07T04:02:52Z | 2026-04-07 04:04:24.267484 | orchestrator | | user_id | 12daf045d4fe4e54bc11527925bc3656 | 2026-04-07 04:04:24.267504 | orchestrator | | volumes_attached | delete_on_termination='True', id='22bd2d36-dab7-4a56-9315-c98ead345662' | 2026-04-07 04:04:24.270495 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-07 04:04:24.623085 | orchestrator | + server_ping 2026-04-07 04:04:24.624399 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-07 04:04:24.624440 | orchestrator | ++ tr -d '\r' 2026-04-07 04:04:27.695443 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-07 04:04:27.695520 | orchestrator | + ping -c3 192.168.112.126 2026-04-07 04:04:27.718090 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data. 
2026-04-07 04:04:27.718181 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=10.8 ms 2026-04-07 04:04:28.711605 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=3.06 ms 2026-04-07 04:04:29.711978 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=2.06 ms 2026-04-07 04:04:29.712100 | orchestrator | 2026-04-07 04:04:29.712123 | orchestrator | --- 192.168.112.126 ping statistics --- 2026-04-07 04:04:29.712141 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-07 04:04:29.712157 | orchestrator | rtt min/avg/max/mdev = 2.060/5.311/10.817/3.914 ms 2026-04-07 04:04:29.712534 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-07 04:04:29.712567 | orchestrator | + ping -c3 192.168.112.107 2026-04-07 04:04:29.724518 | orchestrator | PING 192.168.112.107 (192.168.112.107) 56(84) bytes of data. 2026-04-07 04:04:29.724631 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=1 ttl=63 time=8.31 ms 2026-04-07 04:04:30.720317 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=2 ttl=63 time=2.34 ms 2026-04-07 04:04:31.723220 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=3 ttl=63 time=2.57 ms 2026-04-07 04:04:31.723434 | orchestrator | 2026-04-07 04:04:31.723451 | orchestrator | --- 192.168.112.107 ping statistics --- 2026-04-07 04:04:31.723464 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-07 04:04:31.723476 | orchestrator | rtt min/avg/max/mdev = 2.343/4.407/8.313/2.763 ms 2026-04-07 04:04:31.723499 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-07 04:04:31.723510 | orchestrator | + ping -c3 192.168.112.173 2026-04-07 04:04:31.740335 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data. 
2026-04-07 04:04:31.740435 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=10.8 ms 2026-04-07 04:04:32.732761 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.43 ms 2026-04-07 04:04:33.734270 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=2.03 ms 2026-04-07 04:04:33.734437 | orchestrator | 2026-04-07 04:04:33.734465 | orchestrator | --- 192.168.112.173 ping statistics --- 2026-04-07 04:04:33.734486 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-07 04:04:33.734505 | orchestrator | rtt min/avg/max/mdev = 2.032/5.076/10.767/4.026 ms 2026-04-07 04:04:33.735048 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-07 04:04:33.735095 | orchestrator | + ping -c3 192.168.112.149 2026-04-07 04:04:33.748810 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data. 2026-04-07 04:04:33.748925 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=8.17 ms 2026-04-07 04:04:34.745096 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.66 ms 2026-04-07 04:04:35.745976 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=2.34 ms 2026-04-07 04:04:35.746113 | orchestrator | 2026-04-07 04:04:35.746126 | orchestrator | --- 192.168.112.149 ping statistics --- 2026-04-07 04:04:35.746135 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-07 04:04:35.746142 | orchestrator | rtt min/avg/max/mdev = 2.335/4.390/8.172/2.677 ms 2026-04-07 04:04:35.746790 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-07 04:04:35.746819 | orchestrator | + ping -c3 192.168.112.116 2026-04-07 04:04:35.759651 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-07 04:04:35.759724 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=10.4 ms 2026-04-07 04:04:36.754118 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.68 ms 2026-04-07 04:04:37.755410 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.69 ms 2026-04-07 04:04:37.755503 | orchestrator | 2026-04-07 04:04:37.755516 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-07 04:04:37.755527 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-07 04:04:37.755537 | orchestrator | rtt min/avg/max/mdev = 2.677/5.252/10.393/3.635 ms 2026-04-07 04:04:37.757812 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-07 04:04:38.176443 | orchestrator | ok: Runtime: 0:11:02.284078 2026-04-07 04:04:38.223928 | 2026-04-07 04:04:38.224061 | TASK [Run tempest] 2026-04-07 04:04:38.757965 | orchestrator | skipping: Conditional result was False 2026-04-07 04:04:38.776158 | 2026-04-07 04:04:38.776362 | TASK [Check prometheus alert status] 2026-04-07 04:04:39.313360 | orchestrator | skipping: Conditional result was False 2026-04-07 04:04:39.327596 | 2026-04-07 04:04:39.327742 | PLAY [Upgrade testbed] 2026-04-07 04:04:39.339550 | 2026-04-07 04:04:39.339671 | TASK [Print next ceph version] 2026-04-07 04:04:39.428065 | orchestrator | ok 2026-04-07 04:04:39.437637 | 2026-04-07 04:04:39.437788 | TASK [Print next openstack version] 2026-04-07 04:04:39.508596 | orchestrator | ok 2026-04-07 04:04:39.519522 | 2026-04-07 04:04:39.519652 | TASK [Print next manager version] 2026-04-07 04:04:39.589779 | orchestrator | ok 2026-04-07 04:04:39.600574 | 2026-04-07 04:04:39.600749 | TASK [Set cloud fact (Zuul deployment)] 2026-04-07 04:04:39.652885 | orchestrator | ok 2026-04-07 04:04:39.665034 | 2026-04-07 04:04:39.665246 | TASK [Set cloud fact (local deployment)] 2026-04-07 04:04:39.691781 | orchestrator | skipping: Conditional result was False 2026-04-07 04:04:39.709138 | 2026-04-07 
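The `server_ping` step traced above can be sketched as a small shell helper. This is a reconstruction from the trace, not the testbed's actual script: the cloud name `test`, the ping count, and the `tr -d '\r'` cleanup are all taken verbatim from the `+`-prefixed trace lines, but the surrounding function body is an assumption.

```shell
# Sketch of the server_ping check seen in the trace above (hypothetical
# reconstruction): list all ACTIVE floating IPs in machine-readable form,
# strip stray carriage returns from the CLI output, and ping each address
# three times. "set -e" (active in the traced script) makes any failed
# ping abort the job.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

The `tr -d '\r'` matters because `-f value` output can carry Windows-style line endings through some transports, and `ping 192.168.112.126\r` would fail name resolution.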
04:04:39.709300 | TASK [Fetch manager address]
2026-04-07 04:04:40.013847 | orchestrator | ok
2026-04-07 04:04:40.023744 |
2026-04-07 04:04:40.023885 | TASK [Set manager_host address]
2026-04-07 04:04:40.123928 | orchestrator | ok
2026-04-07 04:04:40.134656 |
2026-04-07 04:04:40.134795 | TASK [Run upgrade]
2026-04-07 04:04:40.854781 | orchestrator | + set -e
2026-04-07 04:04:40.854957 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-07 04:04:40.854977 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-07 04:04:40.854984 | orchestrator | + CEPH_VERSION=reef
2026-04-07 04:04:40.854989 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-07 04:04:40.854993 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-07 04:04:40.854999 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-04-07 04:04:40.867829 | orchestrator | + set -e
2026-04-07 04:04:40.867934 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 04:04:40.867944 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 04:04:40.867955 | orchestrator | ++ INTERACTIVE=false
2026-04-07 04:04:40.867959 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 04:04:40.867965 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 04:04:40.869142 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-04-07 04:04:40.906696 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-04-07 04:04:40.907575 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-07 04:04:40.950925 | orchestrator |
2026-04-07 04:04:40.951013 | orchestrator | # UPGRADE MANAGER
2026-04-07 04:04:40.951026 | orchestrator |
2026-04-07 04:04:40.951032 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-04-07 04:04:40.951039 | orchestrator | + echo
2026-04-07 04:04:40.951048 | orchestrator | + echo '# UPGRADE MANAGER'
2026-04-07 04:04:40.951054 | orchestrator | + echo
2026-04-07 04:04:40.951059 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-07 04:04:40.951065 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-07 04:04:40.951071 | orchestrator | + CEPH_VERSION=reef
2026-04-07 04:04:40.951076 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-07 04:04:40.951082 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-07 04:04:40.951097 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-07 04:04:40.957446 | orchestrator | + set -e
2026-04-07 04:04:40.957531 | orchestrator | + VERSION=10.0.0
2026-04-07 04:04:40.957542 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-07 04:04:40.965988 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-07 04:04:40.966133 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-07 04:04:40.972047 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-07 04:04:40.976834 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-07 04:04:40.985400 | orchestrator | /opt/configuration ~
2026-04-07 04:04:40.985488 | orchestrator | + set -e
2026-04-07 04:04:40.985498 | orchestrator | + pushd /opt/configuration
2026-04-07 04:04:40.985506 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-07 04:04:40.985515 | orchestrator | + source /opt/venv/bin/activate
2026-04-07 04:04:40.986718 | orchestrator | ++ deactivate nondestructive
2026-04-07 04:04:40.986739 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:40.986745 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:40.986751 | orchestrator | ++ hash -r
2026-04-07 04:04:40.986757 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:40.986763 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-07 04:04:40.986769 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-07 04:04:40.986775 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-07 04:04:40.986784 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-07 04:04:40.986790 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-07 04:04:40.986796 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-07 04:04:40.986802 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-07 04:04:40.986808 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:40.986815 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:40.986821 | orchestrator | ++ export PATH
2026-04-07 04:04:40.986827 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:40.986833 | orchestrator | ++ '[' -z '' ']'
2026-04-07 04:04:40.986839 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-07 04:04:40.986845 | orchestrator | ++ PS1='(venv) '
2026-04-07 04:04:40.986851 | orchestrator | ++ export PS1
2026-04-07 04:04:40.986857 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-07 04:04:40.986881 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-07 04:04:40.986888 | orchestrator | ++ hash -r
2026-04-07 04:04:40.986897 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-07 04:04:42.375308 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-07 04:04:42.377147 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-07 04:04:42.379617 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-07 04:04:42.381018 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-07 04:04:42.382528 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-07 04:04:42.393258 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-07 04:04:42.394944 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-07 04:04:42.396109 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-07 04:04:42.397513 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-07 04:04:42.436296 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-07 04:04:42.438191 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-07 04:04:42.440094 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-07 04:04:42.441475 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-07 04:04:42.446272 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-07 04:04:42.735256 | orchestrator | ++ which gilt
2026-04-07 04:04:42.737100 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-07 04:04:42.737192 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-07 04:04:43.043433 | orchestrator | osism.cfg-generics:
2026-04-07 04:04:43.139807 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-07 04:04:43.140929 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-07 04:04:43.142539 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-07 04:04:43.142574 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-07 04:04:44.365844 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-07 04:04:44.375604 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-07 04:04:44.844248 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-07 04:04:44.908150 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-07 04:04:44.908223 | orchestrator | + deactivate
2026-04-07 04:04:44.908235 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-07 04:04:44.908247 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:44.908255 | orchestrator | + export PATH
2026-04-07 04:04:44.908266 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-07 04:04:44.908277 | orchestrator | + '[' -n '' ']'
2026-04-07 04:04:44.908287 | orchestrator | + hash -r
2026-04-07 04:04:44.908347 | orchestrator | + '[' -n '' ']'
2026-04-07 04:04:44.908354 | orchestrator | + unset VIRTUAL_ENV
2026-04-07 04:04:44.908360 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-07 04:04:44.908365 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-07 04:04:44.908371 | orchestrator | + unset -f deactivate
2026-04-07 04:04:44.908377 | orchestrator | + popd
2026-04-07 04:04:44.908425 | orchestrator | ~
2026-04-07 04:04:44.910940 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-07 04:04:44.911081 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-07 04:04:44.916748 | orchestrator | + set -e
2026-04-07 04:04:44.916829 | orchestrator | + NAMESPACE=kolla/release
2026-04-07 04:04:44.916841 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-07 04:04:44.926222 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-07 04:04:44.937013 | orchestrator | /opt/configuration ~
2026-04-07 04:04:44.937083 | orchestrator | + set -e
2026-04-07 04:04:44.937090 | orchestrator | + pushd /opt/configuration
2026-04-07 04:04:44.937095 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-07 04:04:44.937100 | orchestrator | + source /opt/venv/bin/activate
2026-04-07 04:04:44.937111 | orchestrator | ++ deactivate nondestructive
2026-04-07 04:04:44.937116 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:44.937120 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:44.937124 | orchestrator | ++ hash -r
2026-04-07 04:04:44.937128 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:44.937132 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-07 04:04:44.937135 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-07 04:04:44.937140 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-07 04:04:44.937144 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-07 04:04:44.937148 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-07 04:04:44.937154 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-07 04:04:44.937161 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-07 04:04:44.937169 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:44.937173 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:44.937177 | orchestrator | ++ export PATH
2026-04-07 04:04:44.937288 | orchestrator | ++ '[' -n '' ']'
2026-04-07 04:04:44.937295 | orchestrator | ++ '[' -z '' ']'
2026-04-07 04:04:44.937336 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-07 04:04:44.937341 | orchestrator | ++ PS1='(venv) '
2026-04-07 04:04:44.937363 | orchestrator | ++ export PS1
2026-04-07 04:04:44.937369 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-07 04:04:44.937372 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-07 04:04:44.937376 | orchestrator | ++ hash -r
2026-04-07 04:04:44.937465 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-07 04:04:45.563920 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-07 04:04:45.564949 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-07 04:04:45.566222 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-07 04:04:45.567591 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-07 04:04:45.568634 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-07 04:04:45.579585 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-07 04:04:45.581076 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-07 04:04:45.582062 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-07 04:04:45.583481 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-07 04:04:45.626234 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-07 04:04:45.627688 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-07 04:04:45.629595 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-07 04:04:45.631097 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-07 04:04:45.635208 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-07 04:04:45.924315 | orchestrator | ++ which gilt
2026-04-07 04:04:45.924534 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-07 04:04:45.924557 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-07 04:04:46.113400 | orchestrator | osism.cfg-generics:
2026-04-07 04:04:46.187288 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-07 04:04:46.187359 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-07 04:04:46.187732 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-07 04:04:46.187754 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-07 04:04:46.880763 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-07 04:04:46.900201 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-07 04:04:47.346725 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-07 04:04:47.431441 | orchestrator | ~
2026-04-07 04:04:47.431529 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-07 04:04:47.431543 | orchestrator | + deactivate
2026-04-07 04:04:47.431552 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-07 04:04:47.431562 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-07 04:04:47.431569 | orchestrator | + export PATH
2026-04-07 04:04:47.431576 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-07 04:04:47.431583 | orchestrator | + '[' -n '' ']'
2026-04-07 04:04:47.431590 | orchestrator | + hash -r
2026-04-07 04:04:47.431597 | orchestrator | + '[' -n '' ']'
2026-04-07 04:04:47.431606 | orchestrator | + unset VIRTUAL_ENV
2026-04-07 04:04:47.431613 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-07 04:04:47.431620 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-07 04:04:47.431626 | orchestrator | + unset -f deactivate
2026-04-07 04:04:47.431634 | orchestrator | + popd
2026-04-07 04:04:47.433201 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-07 04:04:47.487106 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-07 04:04:47.488179 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-07 04:04:47.550445 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-07 04:04:47.550511 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-07 04:04:47.555790 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-07 04:04:47.560590 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-04-07 04:04:47.619095 | orchestrator | ++ '[' -1 -le 0 ']'
2026-04-07 04:04:47.620118 | orchestrator | +++ semver 10.0.0 10.0.0-0
2026-04-07 04:04:47.699317 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-04-07 04:04:47.699403 | orchestrator | ++ echo true
2026-04-07 04:04:47.699414 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-07 04:04:47.701378 | orchestrator | +++ semver 2024.2 2024.2
2026-04-07 04:04:47.771270 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-07 04:04:47.771352 | orchestrator | +++ semver 2024.2 2025.1
2026-04-07 04:04:47.828518 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-07 04:04:47.828653 | orchestrator | ++ echo false
2026-04-07 04:04:47.829744 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-07 04:04:47.829828 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-07 04:04:47.829845 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-07 04:04:47.829914 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-07 04:04:47.829926 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-07 04:04:47.834503 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-07 04:04:47.834565 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-07 04:04:47.849370 | orchestrator | export RABBITMQ3TO4=true
2026-04-07 04:04:47.852673 | orchestrator | + osism update manager
2026-04-07 04:04:54.180690 | orchestrator | Collecting uv
2026-04-07 04:04:54.284326 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-07 04:04:54.304128 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB)
2026-04-07 04:04:55.387268 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 25.7 MB/s eta 0:00:00
2026-04-07 04:04:55.464287 | orchestrator | Installing collected packages: uv
2026-04-07 04:04:55.964709 | orchestrator | Successfully installed uv-0.11.3
2026-04-07 04:04:56.789385 | orchestrator | Resolved 11 packages in 425ms
2026-04-07 04:04:56.832672 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-07 04:04:56.833031 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-07 04:04:56.833080 | orchestrator | Downloading ansible (54.5MiB)
2026-04-07 04:04:56.833170 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-07 04:04:57.213245 | orchestrator | Downloaded netaddr
2026-04-07 04:04:57.320562 | orchestrator | Downloaded cryptography
2026-04-07 04:04:57.431606 | orchestrator | Downloaded ansible-core
2026-04-07 04:05:04.974327 | orchestrator | Downloaded ansible
2026-04-07 04:05:04.974411 | orchestrator | Prepared 11 packages in 8.18s
2026-04-07 04:05:05.652726 | orchestrator | Installed 11 packages in 677ms
2026-04-07 04:05:05.652806 | orchestrator | + ansible==11.11.0
2026-04-07 04:05:05.652817 | orchestrator | + ansible-core==2.18.15
2026-04-07 04:05:05.652825 | orchestrator | + cffi==2.0.0
2026-04-07 04:05:05.652832 | orchestrator | + cryptography==46.0.6
2026-04-07 04:05:05.652874 | orchestrator | + jinja2==3.1.6
2026-04-07 04:05:05.652883 | orchestrator | + markupsafe==3.0.3
2026-04-07 04:05:05.652888 | orchestrator | + netaddr==1.3.0
2026-04-07 04:05:05.652892 | orchestrator | + packaging==26.0
2026-04-07 04:05:05.652895 | orchestrator | + pycparser==3.0
2026-04-07 04:05:05.652899 | orchestrator | + pyyaml==6.0.3
2026-04-07 04:05:05.652906 | orchestrator | + resolvelib==1.0.1
2026-04-07 04:05:06.882599 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-206417wbswom1o/tmp05afyzt8/ansible-collection-services20a9cgpt'...
2026-04-07 04:05:08.474478 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-07 04:05:08.474586 | orchestrator | Already on 'main'
2026-04-07 04:05:09.050890 | orchestrator | Starting galaxy collection install process
2026-04-07 04:05:09.050984 | orchestrator | Process install dependency map
2026-04-07 04:05:09.050995 | orchestrator | Starting collection install process
2026-04-07 04:05:09.051004 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-07 04:05:09.051013 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-07 04:05:09.051020 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-07 04:05:09.647069 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2064685r96k4nf/tmpab5es8va/ansible-playbooks-manager3r5wla5i'...
2026-04-07 04:05:10.567104 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-07 04:05:10.567201 | orchestrator | Already on 'main'
2026-04-07 04:05:10.841497 | orchestrator | Starting galaxy collection install process
2026-04-07 04:05:10.841611 | orchestrator | Process install dependency map
2026-04-07 04:05:10.841626 | orchestrator | Starting collection install process
2026-04-07 04:05:10.841637 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-07 04:05:10.841733 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-07 04:05:10.841744 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-07 04:05:11.521985 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-07 04:05:11.522117 | orchestrator | -vvvv to see details
2026-04-07 04:05:12.020545 | orchestrator |
2026-04-07 04:05:12.020639 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-07 04:05:12.020653 | orchestrator |
2026-04-07 04:05:12.020707 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-07 04:05:17.403545 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:17.403659 | orchestrator |
2026-04-07 04:05:17.403682 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-07 04:05:17.498964 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 04:05:17.499099 | orchestrator |
2026-04-07 04:05:17.499118 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-07 04:05:19.462143 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:19.462337 | orchestrator |
2026-04-07 04:05:19.462365 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-07 04:05:19.536474 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:19.536564 | orchestrator |
2026-04-07 04:05:19.536579 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-07 04:05:19.600896 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-07 04:05:19.600992 | orchestrator |
2026-04-07 04:05:19.601003 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-07 04:05:24.136891 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-07 04:05:24.136964 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-07 04:05:24.136970 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-07 04:05:24.136984 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-07 04:05:24.136989 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-07 04:05:24.136993 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-07 04:05:24.136997 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-07 04:05:24.137001 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-07 04:05:24.137006 | orchestrator |
2026-04-07 04:05:24.137011 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-07 04:05:25.292534 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:25.292624 | orchestrator |
2026-04-07 04:05:25.292638 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-07 04:05:26.231215 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:26.231299 | orchestrator |
2026-04-07 04:05:26.231311 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-07 04:05:26.325328 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-07 04:05:26.325433 | orchestrator |
2026-04-07 04:05:26.325445 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-07 04:05:28.460720 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-07 04:05:28.460812 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-07 04:05:28.460881 | orchestrator |
2026-04-07 04:05:28.460899 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-07 04:05:29.436082 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:29.436161 | orchestrator |
2026-04-07 04:05:29.436169 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-07 04:05:29.495872 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:05:29.495954 | orchestrator |
2026-04-07 04:05:29.495963 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-07 04:05:29.596966 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-07 04:05:29.597049 | orchestrator |
2026-04-07 04:05:29.597057 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-07 04:05:30.668168 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:30.668267 | orchestrator |
2026-04-07 04:05:30.668283 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-07 04:05:30.729630 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-07 04:05:30.729728 | orchestrator |
2026-04-07 04:05:30.729744 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-07 04:05:32.834448 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-07 04:05:32.834559 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-07 04:05:32.834580 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:32.834595 | orchestrator |
2026-04-07 04:05:32.834609 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-07 04:05:33.810332 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:33.810408 | orchestrator |
2026-04-07 04:05:33.810416 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-07 04:05:33.873110 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:05:33.873200 | orchestrator |
2026-04-07 04:05:33.873213 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-07 04:05:34.002103 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-07 04:05:34.002186 | orchestrator |
2026-04-07 04:05:34.002197 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-07 04:05:34.812980 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:34.813080 | orchestrator |
2026-04-07 04:05:34.813096 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-07 04:05:35.412963 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:35.413053 | orchestrator |
2026-04-07 04:05:35.413093 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-07 04:05:37.446306 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-07 04:05:37.446422 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-07 04:05:37.446445 | orchestrator |
2026-04-07 04:05:37.446464 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-07 04:05:38.674430 | orchestrator | changed: [testbed-manager]
2026-04-07 04:05:38.675203 | orchestrator |
2026-04-07 04:05:38.675238 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-07 04:05:39.277756 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:39.277919 | orchestrator |
2026-04-07 04:05:39.277936 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-07 04:05:39.815941 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:39.816026 | orchestrator |
2026-04-07 04:05:39.816037 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-07 04:05:39.887131 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:05:39.887216 | orchestrator |
2026-04-07 04:05:39.887226 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-07 04:05:39.981502 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-07 04:05:39.981616 | orchestrator |
2026-04-07 04:05:39.981634 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-07 04:05:40.029273 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:40.029356 | orchestrator |
2026-04-07 04:05:40.029366 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-07 04:05:43.004740 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-07 04:05:43.005080 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-07 04:05:43.005111 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-07 04:05:43.005128 | orchestrator |
2026-04-07 04:05:43.005147 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-07 04:05:44.073258 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:44.073360 | orchestrator |
2026-04-07 04:05:44.073386 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-07 04:05:45.281160 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:45.281254 | orchestrator |
2026-04-07 04:05:45.281265 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-07 04:05:46.336871 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:46.336980 | orchestrator |
2026-04-07 04:05:46.336999 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-07 04:05:46.423253 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-07 04:05:46.423359 | orchestrator |
2026-04-07 04:05:46.423376 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-07 04:05:46.481941 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:46.482078 | orchestrator |
2026-04-07 04:05:46.482094 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-07 04:05:47.542002 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-04-07 04:05:47.542194 | orchestrator |
2026-04-07 04:05:47.542212 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-07 04:05:47.636696 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-07 04:05:47.636856 | orchestrator |
2026-04-07 04:05:47.636886 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-07 04:05:48.734487 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:48.734606 | orchestrator |
2026-04-07 04:05:48.734623 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-07 04:05:49.933893 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:49.934073 | orchestrator |
2026-04-07 04:05:49.934090 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-07 04:05:50.006476 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:05:50.006559 | orchestrator |
2026-04-07 04:05:50.006573 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-07 04:05:50.099010 | orchestrator | ok: [testbed-manager]
2026-04-07 04:05:50.099089 | orchestrator |
2026-04-07 04:05:50.099099 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-07 04:05:51.556019 | orchestrator | changed: [testbed-manager]
2026-04-07 04:05:51.556137 | orchestrator |
2026-04-07 04:05:51.556170 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-07 04:07:05.952409 | orchestrator | changed: [testbed-manager]
2026-04-07 04:07:05.952549 | orchestrator |
2026-04-07 04:07:05.952575 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-07 04:07:07.425961 | orchestrator | ok: [testbed-manager]
2026-04-07 04:07:07.426097 | orchestrator |
2026-04-07 04:07:07.426112 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-07 04:07:07.511128 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:07:07.511228 | orchestrator |
2026-04-07 04:07:07.511244 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-07 04:07:08.505105 | orchestrator | ok: [testbed-manager]
2026-04-07 04:07:08.505189 | orchestrator |
2026-04-07 04:07:08.505200 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-07 04:07:08.577379 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:07:08.577483 | orchestrator |
2026-04-07 04:07:08.577501 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-07 04:07:08.577515 | orchestrator |
2026-04-07 04:07:08.577528 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-07 04:07:24.523547 | orchestrator | changed: [testbed-manager]
2026-04-07 04:07:24.523677 | orchestrator |
2026-04-07 04:07:24.523707 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-07 04:08:24.604651 | orchestrator | Pausing for 60 seconds
2026-04-07 04:08:24.604824 | orchestrator | changed: [testbed-manager]
2026-04-07 04:08:24.604842 | orchestrator |
2026-04-07 04:08:24.604856 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-04-07 04:08:24.662173 | orchestrator | ok: [testbed-manager]
2026-04-07 04:08:24.662267 | orchestrator |
2026-04-07 04:08:24.662282 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-07 04:08:28.266639 | orchestrator | changed: [testbed-manager]
2026-04-07 04:08:28.266753 | orchestrator |
2026-04-07 04:08:28.266768 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-07 04:09:31.051083 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-07 04:09:31.051170 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-07 04:09:31.051177 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-07 04:09:31.051182 | orchestrator | changed: [testbed-manager] 2026-04-07 04:09:31.051188 | orchestrator | 2026-04-07 04:09:31.051193 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-07 04:09:38.290257 | orchestrator | changed: [testbed-manager] 2026-04-07 04:09:38.290353 | orchestrator | 2026-04-07 04:09:38.290366 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-07 04:09:38.397244 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-07 04:09:38.397357 | orchestrator | 2026-04-07 04:09:38.397370 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-07 04:09:38.397376 | orchestrator | 2026-04-07 04:09:38.397381 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-07 04:09:38.469810 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:09:38.469900 | orchestrator | 2026-04-07 04:09:38.469912 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-07 04:09:38.561005 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-07 04:09:38.561126 | orchestrator | 2026-04-07 04:09:38.561150 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-07 04:09:39.762908 | orchestrator | changed: [testbed-manager] 2026-04-07 04:09:39.762991 | orchestrator | 2026-04-07 04:09:39.762998 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-07 04:09:43.359020 
| orchestrator | ok: [testbed-manager] 2026-04-07 04:09:43.359099 | orchestrator | 2026-04-07 04:09:43.359106 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-07 04:09:43.460234 | orchestrator | ok: [testbed-manager] => { 2026-04-07 04:09:43.460384 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-07 04:09:43.460410 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-07 04:09:43.460432 | orchestrator | "Checking running containers against expected versions...", 2026-04-07 04:09:43.460453 | orchestrator | "", 2026-04-07 04:09:43.460473 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-07 04:09:43.460487 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-07 04:09:43.460505 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.460537 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-07 04:09:43.460571 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.460618 | orchestrator | "", 2026-04-07 04:09:43.460639 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-07 04:09:43.460658 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-07 04:09:43.460675 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.460694 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-07 04:09:43.460713 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.460732 | orchestrator | "", 2026-04-07 04:09:43.460751 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-07 04:09:43.460770 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-07 04:09:43.460790 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.460810 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-07 04:09:43.460830 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.460851 | orchestrator | "", 2026-04-07 04:09:43.460873 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-07 04:09:43.460894 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-07 04:09:43.460909 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.460923 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-07 04:09:43.460936 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.460949 | orchestrator | "", 2026-04-07 04:09:43.460962 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-07 04:09:43.460975 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-07 04:09:43.460990 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461003 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-07 04:09:43.461016 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461035 | orchestrator | "", 2026-04-07 04:09:43.461063 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-07 04:09:43.461083 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461159 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461184 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461204 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461223 | orchestrator | "", 2026-04-07 04:09:43.461242 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-07 04:09:43.461261 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-07 04:09:43.461289 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461308 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-07 
04:09:43.461327 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461348 | orchestrator | "", 2026-04-07 04:09:43.461367 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-07 04:09:43.461385 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-07 04:09:43.461402 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461413 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-07 04:09:43.461424 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461435 | orchestrator | "", 2026-04-07 04:09:43.461446 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-07 04:09:43.461457 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-07 04:09:43.461468 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461478 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-07 04:09:43.461489 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461500 | orchestrator | "", 2026-04-07 04:09:43.461515 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-07 04:09:43.461527 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-07 04:09:43.461538 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461549 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-07 04:09:43.461560 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461571 | orchestrator | "", 2026-04-07 04:09:43.461582 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-07 04:09:43.461657 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461669 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461680 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461691 | orchestrator | " Status: ✅ MATCH", 2026-04-07 
04:09:43.461702 | orchestrator | "", 2026-04-07 04:09:43.461713 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-07 04:09:43.461723 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461738 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461757 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461774 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461792 | orchestrator | "", 2026-04-07 04:09:43.461810 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-07 04:09:43.461825 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461844 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461861 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461880 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.461900 | orchestrator | "", 2026-04-07 04:09:43.461919 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-07 04:09:43.461932 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461944 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.461955 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.461991 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.462003 | orchestrator | "", 2026-04-07 04:09:43.462084 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-07 04:09:43.462119 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.462137 | orchestrator | " Enabled: true", 2026-04-07 04:09:43.462170 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-07 04:09:43.462187 | orchestrator | " Status: ✅ MATCH", 2026-04-07 04:09:43.462205 | orchestrator | "", 2026-04-07 04:09:43.462223 | orchestrator | "=== Summary 
===", 2026-04-07 04:09:43.462241 | orchestrator | "Errors (version mismatches): 0", 2026-04-07 04:09:43.462260 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-07 04:09:43.462277 | orchestrator | "", 2026-04-07 04:09:43.462296 | orchestrator | "✅ All running containers match expected versions!" 2026-04-07 04:09:43.462313 | orchestrator | ] 2026-04-07 04:09:43.462332 | orchestrator | } 2026-04-07 04:09:43.462350 | orchestrator | 2026-04-07 04:09:43.462369 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-07 04:09:43.519249 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:09:43.519374 | orchestrator | 2026-04-07 04:09:43.519392 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:09:43.519406 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-04-07 04:09:43.519418 | orchestrator | 2026-04-07 04:09:56.867819 | orchestrator | 2026-04-07 04:09:56 | INFO  | Task bd2f7eed-e1a8-4170-8303-30ee29e18439 (sync inventory) is running in background. Output coming soon. 
2026-04-07 04:10:32.696717 | orchestrator | 2026-04-07 04:09:58 | INFO  | Starting group_vars file reorganization 2026-04-07 04:10:32.696813 | orchestrator | 2026-04-07 04:09:58 | INFO  | Moved 0 file(s) to their respective directories 2026-04-07 04:10:32.696825 | orchestrator | 2026-04-07 04:09:58 | INFO  | Group_vars file reorganization completed 2026-04-07 04:10:32.696833 | orchestrator | 2026-04-07 04:10:02 | INFO  | Starting variable preparation from inventory 2026-04-07 04:10:32.696840 | orchestrator | 2026-04-07 04:10:05 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-07 04:10:32.696848 | orchestrator | 2026-04-07 04:10:05 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-07 04:10:32.696855 | orchestrator | 2026-04-07 04:10:05 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-07 04:10:32.696862 | orchestrator | 2026-04-07 04:10:05 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-07 04:10:32.696868 | orchestrator | 2026-04-07 04:10:05 | INFO  | Variable preparation completed 2026-04-07 04:10:32.696875 | orchestrator | 2026-04-07 04:10:07 | INFO  | Starting inventory overwrite handling 2026-04-07 04:10:32.696882 | orchestrator | 2026-04-07 04:10:07 | INFO  | Handling group overwrites in 99-overwrite 2026-04-07 04:10:32.696889 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removing group frr:children from 60-generic 2026-04-07 04:10:32.696896 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-07 04:10:32.696902 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-07 04:10:32.696909 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-07 04:10:32.696916 | orchestrator | 2026-04-07 04:10:07 | INFO  | Handling group overwrites in 20-roles 2026-04-07 04:10:32.696923 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-07 04:10:32.696930 | orchestrator | 2026-04-07 04:10:07 | INFO  | Removed 5 group(s) in total 2026-04-07 04:10:32.696937 | orchestrator | 2026-04-07 04:10:07 | INFO  | Inventory overwrite handling completed 2026-04-07 04:10:32.696943 | orchestrator | 2026-04-07 04:10:08 | INFO  | Starting merge of inventory files 2026-04-07 04:10:32.696950 | orchestrator | 2026-04-07 04:10:08 | INFO  | Inventory files merged successfully 2026-04-07 04:10:32.696957 | orchestrator | 2026-04-07 04:10:14 | INFO  | Generating minified hosts file 2026-04-07 04:10:32.696984 | orchestrator | 2026-04-07 04:10:16 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-07 04:10:32.697002 | orchestrator | 2026-04-07 04:10:16 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-07 04:10:32.697009 | orchestrator | 2026-04-07 04:10:18 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-07 04:10:32.697016 | orchestrator | 2026-04-07 04:10:31 | INFO  | Successfully wrote ClusterShell configuration 2026-04-07 04:10:32.965649 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-07 04:10:32.965734 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-07 04:10:32.965744 | orchestrator | + local max_attempts=60 2026-04-07 04:10:32.965752 | orchestrator | + local name=kolla-ansible 2026-04-07 04:10:32.965760 | orchestrator | + local attempt_num=1 2026-04-07 04:10:32.965830 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-07 04:10:32.993571 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 04:10:32.993662 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-07 04:10:32.993675 | orchestrator | + local max_attempts=60 2026-04-07 04:10:32.993687 | orchestrator | + local name=osism-ansible 2026-04-07 04:10:32.993696 | orchestrator | + local attempt_num=1 2026-04-07 
04:10:32.993897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-07 04:10:33.028949 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-07 04:10:33.029039 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-04-07 04:10:33.225289 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-07 04:10:33.225416 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-07 04:10:33.225442 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-07 04:10:33.225461 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-07 04:10:33.225506 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 3 hours ago Up 2 minutes (healthy) 8000/tcp 2026-04-07 04:10:33.225605 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-04-07 04:10:33.225628 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-04-07 04:10:33.225647 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-04-07 04:10:33.225666 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 37 seconds ago 2026-04-07 04:10:33.225685 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 
mariadb 3 hours ago Up 3 minutes (healthy) 3306/tcp 2026-04-07 04:10:33.225696 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-04-07 04:10:33.225737 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 3 hours ago Up 3 minutes (healthy) 6379/tcp 2026-04-07 04:10:33.225749 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-07 04:10:33.225760 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-04-07 04:10:33.225771 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-04-07 04:10:33.225782 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-04-07 04:10:33.231879 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-04-07 04:10:33.231968 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-04-07 04:10:33.231987 | orchestrator | + osism apply facts 2026-04-07 04:10:44.750296 | orchestrator | 2026-04-07 04:10:44 | INFO  | Prepare task for execution of facts. 2026-04-07 04:10:44.830197 | orchestrator | 2026-04-07 04:10:44 | INFO  | Task 6c815dd4-4708-4155-b970-04c8acebc6c4 (facts) was prepared for execution. 2026-04-07 04:10:44.830284 | orchestrator | 2026-04-07 04:10:44 | INFO  | It takes a moment until task 6c815dd4-4708-4155-b970-04c8acebc6c4 (facts) has been started and output is visible here. 
2026-04-07 04:11:04.786384 | orchestrator | 2026-04-07 04:11:04.786461 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-07 04:11:04.786468 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-07 04:11:04.786501 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-07 04:11:04.786510 | orchestrator | 2026-04-07 04:11:04.786515 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-07 04:11:04.786519 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-07 04:11:04.786523 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-07 04:11:04.786532 | orchestrator | Tuesday 07 April 2026 04:10:50 +0000 (0:00:01.666) 0:00:01.666 ********* 2026-04-07 04:11:04.786536 | orchestrator | ok: [testbed-manager] 2026-04-07 04:11:04.786541 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:11:04.786545 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:11:04.786548 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:11:04.786552 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:11:04.786556 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:11:04.786560 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:11:04.786564 | orchestrator | 2026-04-07 04:11:04.786568 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-07 04:11:04.786572 | orchestrator | Tuesday 07 April 2026 04:10:52 +0000 (0:00:02.438) 0:00:04.104 ********* 2026-04-07 04:11:04.786576 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:11:04.786580 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:11:04.786584 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:11:04.786588 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:11:04.786592 | orchestrator | skipping: [testbed-node-3] 2026-04-07 
04:11:04.786595 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:11:04.786599 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:11:04.786603 | orchestrator | 2026-04-07 04:11:04.786607 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-07 04:11:04.786626 | orchestrator | 2026-04-07 04:11:04.786630 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-07 04:11:04.786634 | orchestrator | Tuesday 07 April 2026 04:10:55 +0000 (0:00:02.230) 0:00:06.334 ********* 2026-04-07 04:11:04.786638 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:11:04.786641 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:11:04.786645 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:11:04.786649 | orchestrator | ok: [testbed-manager] 2026-04-07 04:11:04.786653 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:11:04.786656 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:11:04.786660 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:11:04.786664 | orchestrator | 2026-04-07 04:11:04.786668 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-07 04:11:04.786671 | orchestrator | 2026-04-07 04:11:04.786675 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-07 04:11:04.786679 | orchestrator | Tuesday 07 April 2026 04:11:02 +0000 (0:00:07.089) 0:00:13.424 ********* 2026-04-07 04:11:04.786683 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:11:04.786687 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:11:04.786690 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:11:04.786694 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:11:04.786698 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:11:04.786702 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:11:04.786705 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 04:11:04.786709 | orchestrator | 2026-04-07 04:11:04.786713 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:11:04.786717 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786722 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786726 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786730 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786734 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786738 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786742 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-07 04:11:04.786745 | orchestrator | 2026-04-07 04:11:04.786749 | orchestrator | 2026-04-07 04:11:04.786753 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:11:04.786757 | orchestrator | Tuesday 07 April 2026 04:11:04 +0000 (0:00:02.259) 0:00:15.684 ********* 2026-04-07 04:11:04.786761 | orchestrator | =============================================================================== 2026-04-07 04:11:04.786776 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.09s 2026-04-07 04:11:04.786780 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.44s 2026-04-07 04:11:04.786784 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.26s 2026-04-07 04:11:04.786788 | orchestrator | osism.commons.facts : Copy fact files 
----------------------------------- 2.23s 2026-04-07 04:11:05.017811 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-07 04:11:05.114353 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 04:11:05.115001 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-07 04:11:05.148739 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-04-07 04:11:05.148838 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-04-07 04:11:05.156549 | orchestrator | + set -e 2026-04-07 04:11:05.156634 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-04-07 04:11:05.156647 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-07 04:11:05.163028 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-04-07 04:11:05.169218 | orchestrator | 2026-04-07 04:11:05.169304 | orchestrator | # UPGRADE SERVICES 2026-04-07 04:11:05.169327 | orchestrator | 2026-04-07 04:11:05.169354 | orchestrator | + set -e 2026-04-07 04:11:05.169379 | orchestrator | + echo 2026-04-07 04:11:05.169398 | orchestrator | + echo '# UPGRADE SERVICES' 2026-04-07 04:11:05.169417 | orchestrator | + echo 2026-04-07 04:11:05.169435 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 04:11:05.170753 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 04:11:05.170830 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 04:11:05.170843 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 04:11:05.170854 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 04:11:05.170866 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 04:11:05.170878 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 04:11:05.170890 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 04:11:05.170901 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 04:11:05.170912 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-04-07 04:11:05.170923 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 04:11:05.170934 | orchestrator | ++ export ARA=false 2026-04-07 04:11:05.170945 | orchestrator | ++ ARA=false 2026-04-07 04:11:05.170956 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 04:11:05.170966 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 04:11:05.170977 | orchestrator | ++ export TEMPEST=false 2026-04-07 04:11:05.170988 | orchestrator | ++ TEMPEST=false 2026-04-07 04:11:05.170999 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 04:11:05.171010 | orchestrator | ++ IS_ZUUL=true 2026-04-07 04:11:05.171021 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 04:11:05.171032 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 04:11:05.171043 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 04:11:05.171054 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 04:11:05.171065 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 04:11:05.171076 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 04:11:05.171087 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-07 04:11:05.171104 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 04:11:05.171121 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 04:11:05.171136 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 04:11:05.171152 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-07 04:11:05.171168 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-07 04:11:05.171187 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-04-07 04:11:05.171206 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-04-07 04:11:05.171225 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-07 04:11:05.176372 | orchestrator | + set -e 2026-04-07 04:11:05.176445 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 04:11:05.177109 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 04:11:05.177227 | 
orchestrator | ++ INTERACTIVE=false 2026-04-07 04:11:05.177241 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 04:11:05.177251 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 04:11:05.177385 | orchestrator | + source /opt/manager-vars.sh 2026-04-07 04:11:05.177399 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-07 04:11:05.177408 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-07 04:11:05.177417 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-07 04:11:05.177426 | orchestrator | ++ CEPH_VERSION=reef 2026-04-07 04:11:05.177436 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-07 04:11:05.177455 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-07 04:11:05.177464 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-07 04:11:05.177518 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-07 04:11:05.177540 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-07 04:11:05.177550 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-07 04:11:05.177560 | orchestrator | ++ export ARA=false 2026-04-07 04:11:05.177570 | orchestrator | ++ ARA=false 2026-04-07 04:11:05.177580 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-07 04:11:05.177590 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-07 04:11:05.177600 | orchestrator | ++ export TEMPEST=false 2026-04-07 04:11:05.177610 | orchestrator | ++ TEMPEST=false 2026-04-07 04:11:05.177627 | orchestrator | ++ export IS_ZUUL=true 2026-04-07 04:11:05.177636 | orchestrator | ++ IS_ZUUL=true 2026-04-07 04:11:05.177786 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 04:11:05.177848 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.132 2026-04-07 04:11:05.177860 | orchestrator | ++ export EXTERNAL_API=false 2026-04-07 04:11:05.177869 | orchestrator | ++ EXTERNAL_API=false 2026-04-07 04:11:05.177877 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-07 04:11:05.177886 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-07 04:11:05.177895 | orchestrator | ++ 
export IMAGE_NODE_USER=ubuntu 2026-04-07 04:11:05.177904 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-07 04:11:05.177914 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-07 04:11:05.177931 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-07 04:11:05.177941 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-07 04:11:05.177950 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-07 04:11:05.177959 | orchestrator | + echo 2026-04-07 04:11:05.178119 | orchestrator | 2026-04-07 04:11:05.178140 | orchestrator | # PULL IMAGES 2026-04-07 04:11:05.178149 | orchestrator | 2026-04-07 04:11:05.178158 | orchestrator | + echo '# PULL IMAGES' 2026-04-07 04:11:05.178166 | orchestrator | + echo 2026-04-07 04:11:05.179979 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-07 04:11:05.248853 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 04:11:05.248920 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-07 04:11:06.836391 | orchestrator | 2026-04-07 04:11:06 | INFO  | Trying to run play pull-images in environment custom 2026-04-07 04:11:16.984538 | orchestrator | 2026-04-07 04:11:16 | INFO  | Prepare task for execution of pull-images. 2026-04-07 04:11:17.078720 | orchestrator | 2026-04-07 04:11:17 | INFO  | Task 315b3d7d-5467-4019-88c6-773d17da9fcd (pull-images) was prepared for execution. 2026-04-07 04:11:17.078806 | orchestrator | 2026-04-07 04:11:17 | INFO  | Task 315b3d7d-5467-4019-88c6-773d17da9fcd is running in background. No more output. Check ARA for logs. 
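The xtrace at the top of this chunk shows `set-kolla-namespace.sh` switching the Kolla image namespace with a single in-place `sed`. A minimal stand-alone sketch of that substitution, reconstructed from the trace (the temp file stands in for `/opt/configuration/inventory/group_vars/all/kolla.yml`, and the real script receives the namespace as its first argument):

```shell
#!/bin/sh
set -e

NAMESPACE="kolla/release/2025.1"   # the real script takes this as "$1"
KOLLA_VARS=$(mktemp)               # stand-in for inventory/group_vars/all/kolla.yml

# Seed the file with the pre-upgrade namespace.
echo 'docker_namespace: kolla/release/2024.2' > "$KOLLA_VARS"

# Same substitution the trace shows: rewrite the docker_namespace line in place.
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_VARS"

cat "$KOLLA_VARS"   # prints: docker_namespace: kolla/release/2025.1
rm -f "$KOLLA_VARS"
```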
2026-04-07 04:11:17.345080 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-04-07 04:11:17.359650 | orchestrator | + set -e 2026-04-07 04:11:17.359720 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 04:11:17.359727 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 04:11:17.359733 | orchestrator | ++ INTERACTIVE=false 2026-04-07 04:11:17.359763 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 04:11:17.360099 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 04:11:17.360121 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-07 04:11:17.362771 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-07 04:11:17.376864 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-07 04:11:17.376958 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-07 04:11:17.377021 | orchestrator | ++ semver 10.0.0 8.0.3 2026-04-07 04:11:17.437946 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-07 04:11:17.438083 | orchestrator | + osism apply frr 2026-04-07 04:11:29.036712 | orchestrator | 2026-04-07 04:11:29 | INFO  | Prepare task for execution of frr. 2026-04-07 04:11:29.146992 | orchestrator | 2026-04-07 04:11:29 | INFO  | Task 73693ddc-6f87-47b3-b294-11dcc260aa8a (frr) was prepared for execution. 2026-04-07 04:11:29.147059 | orchestrator | 2026-04-07 04:11:29 | INFO  | It takes a moment until task 73693ddc-6f87-47b3-b294-11dcc260aa8a (frr) has been started and output is visible here. 
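`500-kubernetes.sh` above derives `MANAGER_VERSION` from the configuration repository with `awk` and then gates on `semver 10.0.0 8.0.3` returning a value `-ge 0`, i.e. "current version is at least the minimum". A sketch of both steps; `semver_ge` is a `sort -V` stand-in for the real `semver` helper (an assumption — unlike true semver it does not order pre-release tags such as `10.0.0-0` correctly):

```shell
#!/bin/sh
set -e

CONF=$(mktemp)   # stand-in for environments/manager/configuration.yml
echo 'manager_version: 10.0.0' > "$CONF"

# Same extraction the trace shows: split fields on ": " and take the value.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$CONF")
rm -f "$CONF"

# Stand-in for the semver helper: true when $1 >= $2 by version sort.
semver_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

if semver_ge "$MANAGER_VERSION" 8.0.3; then
    echo "manager $MANAGER_VERSION >= 8.0.3: run the Kubernetes upgrade steps"
fi
```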
2026-04-07 04:12:09.669507 | orchestrator | 2026-04-07 04:12:09.669623 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-07 04:12:09.669645 | orchestrator | 2026-04-07 04:12:09.669657 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-07 04:12:09.669669 | orchestrator | Tuesday 07 April 2026 04:11:37 +0000 (0:00:03.731) 0:00:03.731 ********* 2026-04-07 04:12:09.669695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 04:12:09.669712 | orchestrator | 2026-04-07 04:12:09.669727 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-07 04:12:09.669741 | orchestrator | Tuesday 07 April 2026 04:11:41 +0000 (0:00:03.714) 0:00:07.446 ********* 2026-04-07 04:12:09.669755 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.669770 | orchestrator | 2026-04-07 04:12:09.669779 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-07 04:12:09.669808 | orchestrator | Tuesday 07 April 2026 04:11:43 +0000 (0:00:02.545) 0:00:09.991 ********* 2026-04-07 04:12:09.669816 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.669824 | orchestrator | 2026-04-07 04:12:09.669832 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-07 04:12:09.669841 | orchestrator | Tuesday 07 April 2026 04:11:47 +0000 (0:00:03.256) 0:00:13.248 ********* 2026-04-07 04:12:09.669848 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.669860 | orchestrator | 2026-04-07 04:12:09.669879 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-07 04:12:09.669892 | orchestrator | Tuesday 07 April 2026 04:11:49 +0000 (0:00:02.094) 0:00:15.343 ********* 2026-04-07 
04:12:09.669906 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.669919 | orchestrator | 2026-04-07 04:12:09.669932 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-07 04:12:09.669945 | orchestrator | Tuesday 07 April 2026 04:11:51 +0000 (0:00:02.096) 0:00:17.439 ********* 2026-04-07 04:12:09.669953 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.669961 | orchestrator | 2026-04-07 04:12:09.669969 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-07 04:12:09.669977 | orchestrator | Tuesday 07 April 2026 04:11:53 +0000 (0:00:02.710) 0:00:20.150 ********* 2026-04-07 04:12:09.669985 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:12:09.669994 | orchestrator | 2026-04-07 04:12:09.670004 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-07 04:12:09.670013 | orchestrator | Tuesday 07 April 2026 04:11:55 +0000 (0:00:01.217) 0:00:21.368 ********* 2026-04-07 04:12:09.670076 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:12:09.670086 | orchestrator | 2026-04-07 04:12:09.670098 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-07 04:12:09.670113 | orchestrator | Tuesday 07 April 2026 04:11:56 +0000 (0:00:01.188) 0:00:22.557 ********* 2026-04-07 04:12:09.670127 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:12:09.670142 | orchestrator | 2026-04-07 04:12:09.670156 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-07 04:12:09.670169 | orchestrator | Tuesday 07 April 2026 04:11:57 +0000 (0:00:01.503) 0:00:24.060 ********* 2026-04-07 04:12:09.670183 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:12:09.670197 | orchestrator | 2026-04-07 04:12:09.670211 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the 
configuration repository] *** 2026-04-07 04:12:09.670224 | orchestrator | Tuesday 07 April 2026 04:11:59 +0000 (0:00:01.321) 0:00:25.382 ********* 2026-04-07 04:12:09.670237 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:12:09.670252 | orchestrator | 2026-04-07 04:12:09.670265 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-07 04:12:09.670278 | orchestrator | Tuesday 07 April 2026 04:12:00 +0000 (0:00:01.258) 0:00:26.641 ********* 2026-04-07 04:12:09.670291 | orchestrator | ok: [testbed-manager] 2026-04-07 04:12:09.670305 | orchestrator | 2026-04-07 04:12:09.670318 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-07 04:12:09.670333 | orchestrator | Tuesday 07 April 2026 04:12:02 +0000 (0:00:02.304) 0:00:28.945 ********* 2026-04-07 04:12:09.670346 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-07 04:12:09.670361 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-07 04:12:09.670377 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-07 04:12:09.670391 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-07 04:12:09.670443 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-07 04:12:09.670458 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-07 04:12:09.670472 | orchestrator | 2026-04-07 04:12:09.670486 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-04-07 04:12:09.670515 | orchestrator | Tuesday 07 April 2026 04:12:06 +0000 (0:00:03.832) 0:00:32.778 ********* 2026-04-07 04:12:09.670529 | orchestrator | ok: 
[testbed-manager] 2026-04-07 04:12:09.670543 | orchestrator | 2026-04-07 04:12:09.670558 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:12:09.670573 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-07 04:12:09.670587 | orchestrator | 2026-04-07 04:12:09.670602 | orchestrator | 2026-04-07 04:12:09.670617 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:12:09.670626 | orchestrator | Tuesday 07 April 2026 04:12:09 +0000 (0:00:02.747) 0:00:35.526 ********* 2026-04-07 04:12:09.670634 | orchestrator | =============================================================================== 2026-04-07 04:12:09.670661 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.83s 2026-04-07 04:12:09.670669 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 3.72s 2026-04-07 04:12:09.670677 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.26s 2026-04-07 04:12:09.670685 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.75s 2026-04-07 04:12:09.670693 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.71s 2026-04-07 04:12:09.670701 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.54s 2026-04-07 04:12:09.670709 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.30s 2026-04-07 04:12:09.670717 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.10s 2026-04-07 04:12:09.670725 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 2.09s 2026-04-07 04:12:09.670732 | orchestrator | osism.services.frr : Remove temporary frr_config_template file 
---------- 1.50s 2026-04-07 04:12:09.670740 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.32s 2026-04-07 04:12:09.670748 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.26s 2026-04-07 04:12:09.670756 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.22s 2026-04-07 04:12:09.670764 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.19s 2026-04-07 04:12:09.909670 | orchestrator | + osism apply kubernetes 2026-04-07 04:12:11.404637 | orchestrator | 2026-04-07 04:12:11 | INFO  | Prepare task for execution of kubernetes. 2026-04-07 04:12:11.472993 | orchestrator | 2026-04-07 04:12:11 | INFO  | Task 2e4bbb61-f469-4766-a87a-3a240e8b26cf (kubernetes) was prepared for execution. 2026-04-07 04:12:11.473099 | orchestrator | 2026-04-07 04:12:11 | INFO  | It takes a moment until task 2e4bbb61-f469-4766-a87a-3a240e8b26cf (kubernetes) has been started and output is visible here. 
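The `osism.services.frr` play above loops six sysctl items onto testbed-manager. The same pairs as a plain shell loop, for reference only (this sketch just echoes them; applying them for real would need `sysctl -w` as root on the target host):

```shell
#!/bin/sh
set -e
for kv in \
    net.ipv4.ip_forward=1 \
    net.ipv4.conf.all.send_redirects=0 \
    net.ipv4.conf.all.accept_redirects=0 \
    net.ipv4.fib_multipath_hash_policy=1 \
    net.ipv4.conf.default.ignore_routes_with_linkdown=1 \
    net.ipv4.conf.all.rp_filter=2
do
    echo "$kv"   # a real run would do: sysctl -w "$kv"
done
```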
2026-04-07 04:12:58.504090 | orchestrator | 2026-04-07 04:12:58.504198 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-07 04:12:58.504215 | orchestrator | 2026-04-07 04:12:58.504228 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-07 04:12:58.504240 | orchestrator | Tuesday 07 April 2026 04:12:18 +0000 (0:00:02.418) 0:00:02.418 ********* 2026-04-07 04:12:58.504251 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:12:58.504263 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.504274 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.504285 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.504296 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.504306 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.504317 | orchestrator | 2026-04-07 04:12:58.504328 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-07 04:12:58.504339 | orchestrator | Tuesday 07 April 2026 04:12:22 +0000 (0:00:04.507) 0:00:06.925 ********* 2026-04-07 04:12:58.504404 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.504445 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.504460 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.504471 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.504482 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.504492 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.504503 | orchestrator | 2026-04-07 04:12:58.504521 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-07 04:12:58.504544 | orchestrator | Tuesday 07 April 2026 04:12:24 +0000 (0:00:02.259) 0:00:09.184 ********* 2026-04-07 04:12:58.504571 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.504588 | orchestrator | skipping: [testbed-node-4] 2026-04-07 
04:12:58.504606 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.504623 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.504639 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.504656 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.504672 | orchestrator | 2026-04-07 04:12:58.504689 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-07 04:12:58.504707 | orchestrator | Tuesday 07 April 2026 04:12:27 +0000 (0:00:02.332) 0:00:11.516 ********* 2026-04-07 04:12:58.504725 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:12:58.504743 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.504762 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.504780 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.504799 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.504818 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.504836 | orchestrator | 2026-04-07 04:12:58.504856 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-07 04:12:58.504876 | orchestrator | Tuesday 07 April 2026 04:12:29 +0000 (0:00:02.800) 0:00:14.317 ********* 2026-04-07 04:12:58.504891 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:12:58.504904 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.504917 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.504930 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.504942 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.504954 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.504965 | orchestrator | 2026-04-07 04:12:58.504976 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-07 04:12:58.504987 | orchestrator | Tuesday 07 April 2026 04:12:32 +0000 (0:00:02.498) 0:00:16.815 ********* 2026-04-07 04:12:58.504998 | orchestrator | ok: [testbed-node-3] 2026-04-07 
04:12:58.505009 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.505019 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.505030 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.505041 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.505051 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.505062 | orchestrator | 2026-04-07 04:12:58.505073 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-07 04:12:58.505084 | orchestrator | Tuesday 07 April 2026 04:12:35 +0000 (0:00:02.572) 0:00:19.388 ********* 2026-04-07 04:12:58.505095 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.505106 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.505117 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.505127 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.505138 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.505149 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.505160 | orchestrator | 2026-04-07 04:12:58.505171 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-07 04:12:58.505182 | orchestrator | Tuesday 07 April 2026 04:12:37 +0000 (0:00:02.462) 0:00:21.850 ********* 2026-04-07 04:12:58.505193 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.505203 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.505214 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.505225 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.505235 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.505276 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.505298 | orchestrator | 2026-04-07 04:12:58.505309 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-07 04:12:58.505320 | orchestrator | Tuesday 07 April 2026 04:12:39 +0000 
(0:00:02.273) 0:00:24.124 ********* 2026-04-07 04:12:58.505331 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505368 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505381 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.505392 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505403 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505414 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.505425 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505436 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505447 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.505457 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505468 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505479 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.505510 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505521 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505532 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.505543 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-07 04:12:58.505553 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-07 04:12:58.505564 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.505575 | orchestrator | 2026-04-07 04:12:58.505586 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin 
to sudo secure_path] ********************* 2026-04-07 04:12:58.505596 | orchestrator | Tuesday 07 April 2026 04:12:42 +0000 (0:00:02.316) 0:00:26.441 ********* 2026-04-07 04:12:58.505607 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.505618 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.505628 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.505639 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.505650 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.505661 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.505671 | orchestrator | 2026-04-07 04:12:58.505682 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-07 04:12:58.505694 | orchestrator | Tuesday 07 April 2026 04:12:44 +0000 (0:00:02.340) 0:00:28.781 ********* 2026-04-07 04:12:58.505705 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:12:58.505716 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.505727 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.505737 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.505748 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.505758 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.505769 | orchestrator | 2026-04-07 04:12:58.505780 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-07 04:12:58.505791 | orchestrator | Tuesday 07 April 2026 04:12:46 +0000 (0:00:01.944) 0:00:30.725 ********* 2026-04-07 04:12:58.505801 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:12:58.505812 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:12:58.505823 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:12:58.505833 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:12:58.505844 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:12:58.505855 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:12:58.505869 | 
orchestrator | 2026-04-07 04:12:58.505881 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-07 04:12:58.505900 | orchestrator | Tuesday 07 April 2026 04:12:49 +0000 (0:00:02.782) 0:00:33.508 ********* 2026-04-07 04:12:58.505917 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.505954 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.505982 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.506000 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.506093 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.506117 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.506138 | orchestrator | 2026-04-07 04:12:58.506158 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-07 04:12:58.506180 | orchestrator | Tuesday 07 April 2026 04:12:51 +0000 (0:00:02.121) 0:00:35.630 ********* 2026-04-07 04:12:58.506193 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.506204 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.506215 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.506225 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.506236 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.506247 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.506258 | orchestrator | 2026-04-07 04:12:58.506269 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-07 04:12:58.506281 | orchestrator | Tuesday 07 April 2026 04:12:53 +0000 (0:00:02.299) 0:00:37.929 ********* 2026-04-07 04:12:58.506292 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.506303 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.506314 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.506324 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 04:12:58.506335 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.506419 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:12:58.506432 | orchestrator | 2026-04-07 04:12:58.506443 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-07 04:12:58.506454 | orchestrator | Tuesday 07 April 2026 04:12:55 +0000 (0:00:02.165) 0:00:40.095 ********* 2026-04-07 04:12:58.506465 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-07 04:12:58.506476 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-07 04:12:58.506487 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:12:58.506498 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-07 04:12:58.506509 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-07 04:12:58.506519 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:12:58.506530 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-07 04:12:58.506541 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-07 04:12:58.506551 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:12:58.506562 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-07 04:12:58.506573 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-07 04:12:58.506584 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:12:58.506602 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-07 04:12:58.506614 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-07 04:12:58.506624 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:12:58.506635 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-07 04:12:58.506646 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-07 04:12:58.506657 | orchestrator | skipping: [testbed-node-2] 2026-04-07 
04:12:58.506668 | orchestrator |
2026-04-07 04:12:58.506679 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-07 04:12:58.506690 | orchestrator | Tuesday 07 April 2026  04:12:57 +0000 (0:00:02.121)       0:00:42.217 *********
2026-04-07 04:12:58.506701 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:12:58.506712 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:12:58.506734 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:14:43.696856 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.696916 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.696921 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.696925 | orchestrator |
2026-04-07 04:14:43.696931 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-07 04:14:43.696937 | orchestrator | Tuesday 07 April 2026  04:13:00 +0000 (0:00:02.268)       0:00:44.486 *********
2026-04-07 04:14:43.696941 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:14:43.696945 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:14:43.696949 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:14:43.696953 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.696957 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.696961 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.696965 | orchestrator |
2026-04-07 04:14:43.696969 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-07 04:14:43.696973 | orchestrator |
2026-04-07 04:14:43.696977 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-07 04:14:43.696981 | orchestrator | Tuesday 07 April 2026  04:13:03 +0000 (0:00:03.535)       0:00:48.022 *********
2026-04-07 04:14:43.696985 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.696990 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.696994 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.696997 | orchestrator |
2026-04-07 04:14:43.697001 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-07 04:14:43.697005 | orchestrator | Tuesday 07 April 2026  04:13:07 +0000 (0:00:03.693)       0:00:51.715 *********
2026-04-07 04:14:43.697009 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697012 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697016 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697020 | orchestrator |
2026-04-07 04:14:43.697024 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-07 04:14:43.697028 | orchestrator | Tuesday 07 April 2026  04:13:10 +0000 (0:00:02.752)       0:00:54.468 *********
2026-04-07 04:14:43.697032 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697036 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:14:43.697039 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:14:43.697043 | orchestrator |
2026-04-07 04:14:43.697047 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-07 04:14:43.697051 | orchestrator | Tuesday 07 April 2026  04:13:12 +0000 (0:00:02.196)       0:00:56.664 *********
2026-04-07 04:14:43.697055 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697058 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697062 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697066 | orchestrator |
2026-04-07 04:14:43.697069 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-07 04:14:43.697073 | orchestrator | Tuesday 07 April 2026  04:13:14 +0000 (0:00:01.817)       0:00:58.482 *********
2026-04-07 04:14:43.697077 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.697081 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697085 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697088 | orchestrator |
2026-04-07 04:14:43.697092 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-07 04:14:43.697096 | orchestrator | Tuesday 07 April 2026  04:13:15 +0000 (0:00:01.673)       0:01:00.155 *********
2026-04-07 04:14:43.697100 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697103 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697107 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697111 | orchestrator |
2026-04-07 04:14:43.697115 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-07 04:14:43.697119 | orchestrator | Tuesday 07 April 2026  04:13:18 +0000 (0:00:02.221)       0:01:02.377 *********
2026-04-07 04:14:43.697122 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697126 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697130 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697147 | orchestrator |
2026-04-07 04:14:43.697151 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-07 04:14:43.697155 | orchestrator | Tuesday 07 April 2026  04:13:20 +0000 (0:00:02.456)       0:01:04.833 *********
2026-04-07 04:14:43.697159 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:14:43.697162 | orchestrator |
2026-04-07 04:14:43.697166 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-07 04:14:43.697170 | orchestrator | Tuesday 07 April 2026  04:13:22 +0000 (0:00:02.011)       0:01:06.845 *********
2026-04-07 04:14:43.697174 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697177 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697181 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697185 | orchestrator |
2026-04-07 04:14:43.697188 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-07 04:14:43.697192 | orchestrator | Tuesday 07 April 2026  04:13:25 +0000 (0:00:03.058)       0:01:09.903 *********
2026-04-07 04:14:43.697196 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697200 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697204 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697207 | orchestrator |
2026-04-07 04:14:43.697211 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-07 04:14:43.697215 | orchestrator | Tuesday 07 April 2026  04:13:27 +0000 (0:00:01.738)       0:01:11.642 *********
2026-04-07 04:14:43.697219 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697222 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697226 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697230 | orchestrator |
2026-04-07 04:14:43.697254 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-07 04:14:43.697258 | orchestrator | Tuesday 07 April 2026  04:13:29 +0000 (0:00:01.967)       0:01:13.609 *********
2026-04-07 04:14:43.697262 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697266 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697270 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697273 | orchestrator |
2026-04-07 04:14:43.697277 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-07 04:14:43.697281 | orchestrator | Tuesday 07 April 2026  04:13:31 +0000 (0:00:01.608)       0:01:16.271 *********
2026-04-07 04:14:43.697285 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.697289 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697300 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697304 | orchestrator |
2026-04-07 04:14:43.697308 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-07 04:14:43.697318 | orchestrator | Tuesday 07 April 2026  04:13:33 +0000 (0:00:01.609)       0:01:17.880 *********
2026-04-07 04:14:43.697322 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.697326 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697330 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697334 | orchestrator |
2026-04-07 04:14:43.697338 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-07 04:14:43.697341 | orchestrator | Tuesday 07 April 2026  04:13:35 +0000 (0:00:01.609)       0:01:19.489 *********
2026-04-07 04:14:43.697345 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697349 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:14:43.697352 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:14:43.697356 | orchestrator |
2026-04-07 04:14:43.697360 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-07 04:14:43.697364 | orchestrator | Tuesday 07 April 2026  04:13:37 +0000 (0:00:02.578)       0:01:22.068 *********
2026-04-07 04:14:43.697367 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697371 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697375 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697379 | orchestrator |
2026-04-07 04:14:43.697382 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-07 04:14:43.697390 | orchestrator | Tuesday 07 April 2026  04:13:40 +0000 (0:00:02.380)       0:01:24.449 *********
2026-04-07 04:14:43.697394 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697398 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697402 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697405 | orchestrator |
2026-04-07 04:14:43.697409 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-07 04:14:43.697413 | orchestrator | Tuesday 07 April 2026  04:13:41 +0000 (0:00:01.604)       0:01:26.054 *********
2026-04-07 04:14:43.697417 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-07 04:14:43.697423 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-07 04:14:43.697426 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-07 04:14:43.697430 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-07 04:14:43.697434 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-07 04:14:43.697438 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-07 04:14:43.697442 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697446 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697449 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697453 | orchestrator |
2026-04-07 04:14:43.697457 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-07 04:14:43.697461 | orchestrator | Tuesday 07 April 2026  04:14:05 +0000 (0:00:23.341)       0:01:49.395 *********
2026-04-07 04:14:43.697465 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:14:43.697468 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:14:43.697472 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:14:43.697476 | orchestrator |
2026-04-07 04:14:43.697480 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-07 04:14:43.697483 | orchestrator | Tuesday 07 April 2026  04:14:06 +0000 (0:00:01.688)       0:01:51.084 *********
2026-04-07 04:14:43.697487 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697491 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:14:43.697495 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:14:43.697498 | orchestrator |
2026-04-07 04:14:43.697502 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-07 04:14:43.697506 | orchestrator | Tuesday 07 April 2026  04:14:09 +0000 (0:00:02.648)       0:01:53.733 *********
2026-04-07 04:14:43.697509 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697513 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697517 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697521 | orchestrator |
2026-04-07 04:14:43.697524 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-07 04:14:43.697528 | orchestrator | Tuesday 07 April 2026  04:14:11 +0000 (0:00:02.326)       0:01:56.059 *********
2026-04-07 04:14:43.697532 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:14:43.697535 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:14:43.697539 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697543 | orchestrator |
2026-04-07 04:14:43.697547 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-07 04:14:43.697550 | orchestrator | Tuesday 07 April 2026  04:14:39 +0000 (0:00:27.597)       0:02:23.657 *********
2026-04-07 04:14:43.697554 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697558 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697562 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697565 | orchestrator |
2026-04-07 04:14:43.697569 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-07 04:14:43.697581 | orchestrator | Tuesday 07 April 2026  04:14:41 +0000 (0:00:01.934)       0:02:25.591 *********
2026-04-07 04:14:43.697584 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:14:43.697588 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:14:43.697592 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:14:43.697596 | orchestrator |
2026-04-07 04:14:43.697599 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-07 04:14:43.697603 | orchestrator | Tuesday 07 April 2026  04:14:42 +0000 (0:00:01.740)       0:02:27.332 *********
2026-04-07 04:14:43.697607 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:14:43.697611 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:14:43.697614 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:14:43.697618 | orchestrator |
2026-04-07 04:14:43.697624 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-07 04:15:36.985211 | orchestrator | Tuesday 07 April 2026  04:14:44 +0000 (0:00:01.722)       0:02:29.054 *********
2026-04-07 04:15:36.985309 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:15:36.985323 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:15:36.985333 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:15:36.985341 | orchestrator |
2026-04-07 04:15:36.985352 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-07 04:15:36.985363 | orchestrator | Tuesday 07 April 2026  04:14:46 +0000 (0:00:02.006)       0:02:31.061 *********
2026-04-07 04:15:36.985374 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:15:36.985381 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:15:36.985386 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:15:36.985392 | orchestrator |
2026-04-07 04:15:36.985399 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-07 04:15:36.985405 | orchestrator | Tuesday 07 April 2026  04:14:48 +0000 (0:00:01.657)       0:02:32.719 *********
2026-04-07 04:15:36.985415 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:15:36.985425 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:15:36.985434 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:15:36.985444 | orchestrator |
2026-04-07 04:15:36.985454 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-07 04:15:36.985464 | orchestrator | Tuesday 07 April 2026  04:14:50 +0000 (0:00:01.721)       0:02:34.441 *********
2026-04-07 04:15:36.985473 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:15:36.985483 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:15:36.985488 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:15:36.985494 | orchestrator |
2026-04-07 04:15:36.985501 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-07 04:15:36.985507 | orchestrator | Tuesday 07 April 2026  04:14:51 +0000 (0:00:01.828)       0:02:36.269 *********
2026-04-07 04:15:36.985513 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:15:36.985519 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:15:36.985525 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:15:36.985533 | orchestrator |
2026-04-07 04:15:36.985543 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-07 04:15:36.985552 | orchestrator | Tuesday 07 April 2026  04:14:53 +0000 (0:00:01.904)       0:02:38.173 *********
2026-04-07 04:15:36.985562 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:15:36.985572 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:15:36.985578 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:15:36.985583 | orchestrator |
2026-04-07 04:15:36.985589 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-07 04:15:36.985595 | orchestrator | Tuesday 07 April 2026  04:14:56 +0000 (0:00:02.409)       0:02:40.583 *********
2026-04-07 04:15:36.985601 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:15:36.985607 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:15:36.985613 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:15:36.985621 | orchestrator |
2026-04-07 04:15:36.985630 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-07 04:15:36.985639 | orchestrator | Tuesday 07 April 2026  04:14:57 +0000 (0:00:01.511)       0:02:42.095 *********
2026-04-07 04:15:36.985668 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:15:36.985677 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:15:36.985686 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:15:36.985695 | orchestrator |
2026-04-07 04:15:36.985704 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-07 04:15:36.985713 | orchestrator | Tuesday 07 April 2026  04:14:59 +0000 (0:00:01.549)       0:02:43.644 *********
2026-04-07 04:15:36.985721 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:15:36.985731 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:15:36.985742 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:15:36.985751 | orchestrator |
2026-04-07 04:15:36.985759 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-07 04:15:36.985768 | orchestrator | Tuesday 07 April 2026  04:15:01 +0000 (0:00:01.941)       0:02:45.585 *********
2026-04-07 04:15:36.985776 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:15:36.985784 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:15:36.985794 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:15:36.985802 | orchestrator |
2026-04-07 04:15:36.985811 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-07 04:15:36.985838 | orchestrator | Tuesday 07 April 2026  04:15:03 +0000 (0:00:01.802)       0:02:47.388 *********
2026-04-07 04:15:36.985848 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-07 04:15:36.985868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-07 04:15:36.985876 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-07 04:15:36.985884 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-07 04:15:36.985895 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-07 04:15:36.985915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-07 04:15:36.985929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-07 04:15:36.985939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-07 04:15:36.985948 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-07 04:15:36.985958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-07 04:15:36.985967 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-07 04:15:36.985976 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-07 04:15:36.986001 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-07 04:15:36.986096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-07 04:15:36.986107 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-07 04:15:36.986118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-07 04:15:36.986136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-07 04:15:36.986145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-07 04:15:36.986153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-07 04:15:36.986164 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-07 04:15:36.986178 | orchestrator |
2026-04-07 04:15:36.986209 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-07 04:15:36.986227 | orchestrator |
2026-04-07 04:15:36.986238 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-07 04:15:36.986247 | orchestrator | Tuesday 07 April 2026  04:15:07 +0000 (0:00:04.728)       0:02:52.117 *********
2026-04-07 04:15:36.986256 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986266 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986275 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986288 | orchestrator |
2026-04-07 04:15:36.986301 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-07 04:15:36.986310 | orchestrator | Tuesday 07 April 2026  04:15:09 +0000 (0:00:01.738)       0:02:53.855 *********
2026-04-07 04:15:36.986318 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986327 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986336 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986345 | orchestrator |
2026-04-07 04:15:36.986355 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-07 04:15:36.986364 | orchestrator | Tuesday 07 April 2026  04:15:11 +0000 (0:00:02.235)       0:02:56.091 *********
2026-04-07 04:15:36.986373 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986382 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986397 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986407 | orchestrator |
2026-04-07 04:15:36.986416 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-07 04:15:36.986425 | orchestrator | Tuesday 07 April 2026  04:15:13 +0000 (0:00:01.530)       0:02:57.621 *********
2026-04-07 04:15:36.986434 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 04:15:36.986443 | orchestrator |
2026-04-07 04:15:36.986452 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-07 04:15:36.986461 | orchestrator | Tuesday 07 April 2026  04:15:15 +0000 (0:00:02.114)       0:02:59.736 *********
2026-04-07 04:15:36.986470 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:15:36.986480 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:15:36.986490 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:15:36.986499 | orchestrator |
2026-04-07 04:15:36.986509 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-07 04:15:36.986519 | orchestrator | Tuesday 07 April 2026  04:15:16 +0000 (0:00:01.491)       0:03:01.228 *********
2026-04-07 04:15:36.986529 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:15:36.986540 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:15:36.986546 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:15:36.986551 | orchestrator |
2026-04-07 04:15:36.986557 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-07 04:15:36.986563 | orchestrator | Tuesday 07 April 2026  04:15:18 +0000 (0:00:01.500)       0:03:02.728 *********
2026-04-07 04:15:36.986569 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:15:36.986574 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:15:36.986580 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:15:36.986586 | orchestrator |
2026-04-07 04:15:36.986591 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-07 04:15:36.986597 | orchestrator | Tuesday 07 April 2026  04:15:19 +0000 (0:00:01.431)       0:03:04.160 *********
2026-04-07 04:15:36.986603 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986608 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986614 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986620 | orchestrator |
2026-04-07 04:15:36.986625 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-07 04:15:36.986631 | orchestrator | Tuesday 07 April 2026  04:15:21 +0000 (0:00:01.879)       0:03:06.039 *********
2026-04-07 04:15:36.986637 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986642 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986648 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986654 | orchestrator |
2026-04-07 04:15:36.986659 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-07 04:15:36.986665 | orchestrator | Tuesday 07 April 2026  04:15:24 +0000 (0:00:02.361)       0:03:08.400 *********
2026-04-07 04:15:36.986678 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:15:36.986684 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:15:36.986690 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:15:36.986696 | orchestrator |
2026-04-07 04:15:36.986706 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-07 04:15:36.986712 | orchestrator | Tuesday 07 April 2026  04:15:26 +0000 (0:00:02.468)       0:03:10.869 *********
2026-04-07 04:15:36.986718 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:15:36.986724 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:15:36.986729 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:15:36.986735 | orchestrator |
2026-04-07 04:15:36.986741 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-07 04:15:36.986747 | orchestrator |
2026-04-07 04:15:36.986752 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-07 04:15:36.986758 | orchestrator | Tuesday 07 April 2026  04:15:34 +0000 (0:00:08.168)       0:03:19.037 *********
2026-04-07 04:15:36.986764 | orchestrator | ok: [testbed-manager]
2026-04-07 04:15:36.986769 | orchestrator |
2026-04-07 04:15:36.986775 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-07 04:15:36.986789 | orchestrator | Tuesday 07 April 2026  04:15:36 +0000 (0:00:02.307)       0:03:21.344 *********
2026-04-07 04:16:53.615681 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.615797 | orchestrator |
2026-04-07 04:16:53.615822 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-07 04:16:53.615889 | orchestrator | Tuesday 07 April 2026  04:15:38 +0000 (0:00:01.502)       0:03:22.847 *********
2026-04-07 04:16:53.615900 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-07 04:16:53.615908 | orchestrator |
2026-04-07 04:16:53.615917 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-07 04:16:53.615925 | orchestrator | Tuesday 07 April 2026  04:15:40 +0000 (0:00:01.615)       0:03:24.463 *********
2026-04-07 04:16:53.615933 | orchestrator | changed: [testbed-manager]
2026-04-07 04:16:53.615940 | orchestrator |
2026-04-07 04:16:53.615948 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-07 04:16:53.615956 | orchestrator | Tuesday 07 April 2026  04:15:42 +0000 (0:00:02.101)       0:03:26.565 *********
2026-04-07 04:16:53.615963 | orchestrator | changed: [testbed-manager]
2026-04-07 04:16:53.615971 | orchestrator |
2026-04-07 04:16:53.615978 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-07 04:16:53.615985 | orchestrator | Tuesday 07 April 2026  04:15:44 +0000 (0:00:01.982)       0:03:28.547 *********
2026-04-07 04:16:53.615993 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-07 04:16:53.616001 | orchestrator |
2026-04-07 04:16:53.616009 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-07 04:16:53.616016 | orchestrator | Tuesday 07 April 2026  04:15:47 +0000 (0:00:03.444)       0:03:31.992 *********
2026-04-07 04:16:53.616024 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-07 04:16:53.616032 | orchestrator |
2026-04-07 04:16:53.616040 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-07 04:16:53.616047 | orchestrator | Tuesday 07 April 2026  04:15:49 +0000 (0:00:02.135)       0:03:34.128 *********
2026-04-07 04:16:53.616055 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616063 | orchestrator |
2026-04-07 04:16:53.616071 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-07 04:16:53.616079 | orchestrator | Tuesday 07 April 2026  04:15:51 +0000 (0:00:01.511)       0:03:35.640 *********
2026-04-07 04:16:53.616087 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616095 | orchestrator |
2026-04-07 04:16:53.616102 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-07 04:16:53.616110 | orchestrator |
2026-04-07 04:16:53.616118 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-07 04:16:53.616125 | orchestrator | Tuesday 07 April 2026  04:15:53 +0000 (0:00:02.308)       0:03:37.948 *********
2026-04-07 04:16:53.616155 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616164 | orchestrator |
2026-04-07 04:16:53.616172 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-07 04:16:53.616179 | orchestrator | Tuesday 07 April 2026  04:15:54 +0000 (0:00:01.259)       0:03:39.207 *********
2026-04-07 04:16:53.616187 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-07 04:16:53.616196 | orchestrator |
2026-04-07 04:16:53.616203 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-07 04:16:53.616211 | orchestrator | Tuesday 07 April 2026  04:15:56 +0000 (0:00:01.669)       0:03:40.877 *********
2026-04-07 04:16:53.616218 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616226 | orchestrator |
2026-04-07 04:16:53.616234 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-07 04:16:53.616242 | orchestrator | Tuesday 07 April 2026  04:15:58 +0000 (0:00:01.917)       0:03:42.795 *********
2026-04-07 04:16:53.616249 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616257 | orchestrator |
2026-04-07 04:16:53.616265 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-07 04:16:53.616273 | orchestrator | Tuesday 07 April 2026  04:16:01 +0000 (0:00:03.199)       0:03:45.994 *********
2026-04-07 04:16:53.616280 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616288 | orchestrator |
2026-04-07 04:16:53.616296 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-07 04:16:53.616303 | orchestrator | Tuesday 07 April 2026  04:16:03 +0000 (0:00:01.519)       0:03:47.514 *********
2026-04-07 04:16:53.616311 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616319 | orchestrator |
2026-04-07 04:16:53.616326 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-07 04:16:53.616334 | orchestrator | Tuesday 07 April 2026  04:16:04 +0000 (0:00:01.616)       0:03:49.130 *********
2026-04-07 04:16:53.616342 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616349 | orchestrator |
2026-04-07 04:16:53.616357 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-07 04:16:53.616365 | orchestrator | Tuesday 07 April 2026  04:16:06 +0000 (0:00:01.799)       0:03:50.929 *********
2026-04-07 04:16:53.616372 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616380 | orchestrator |
2026-04-07 04:16:53.616388 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-07 04:16:53.616395 | orchestrator | Tuesday 07 April 2026  04:16:09 +0000 (0:00:02.767)       0:03:53.697 *********
2026-04-07 04:16:53.616403 | orchestrator | ok: [testbed-manager]
2026-04-07 04:16:53.616411 | orchestrator |
2026-04-07 04:16:53.616419 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-07 04:16:53.616427 | orchestrator |
2026-04-07 04:16:53.616450 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-07 04:16:53.616458 | orchestrator | Tuesday 07 April 2026  04:16:11 +0000 (0:00:02.227)       0:03:55.924 *********
2026-04-07 04:16:53.616466 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:16:53.616474 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:16:53.616482 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:16:53.616490 | orchestrator |
2026-04-07 04:16:53.616497 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-07 04:16:53.616505 | orchestrator | Tuesday 07 April 2026  04:16:13 +0000 (0:00:01.665)       0:03:57.590 *********
2026-04-07 04:16:53.616513 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:16:53.616521 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:16:53.616529 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:16:53.616537 | orchestrator |
2026-04-07 04:16:53.616561 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-07 04:16:53.616569 | orchestrator | Tuesday 07 April 2026  04:16:14 +0000 (0:00:01.388)       0:03:58.978 *********
2026-04-07 04:16:53.616577 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:16:53.616611 | orchestrator |
2026-04-07 04:16:53.616619 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-07 04:16:53.616627 | orchestrator | Tuesday 07 April 2026  04:16:16 +0000 (0:00:02.134)       0:04:01.113 *********
2026-04-07 04:16:53.616635 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-07 04:16:53.616643 | orchestrator |
2026-04-07 04:16:53.616650 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-07 04:16:53.616658 | orchestrator | Tuesday 07 April 2026  04:16:18 +0000 (0:00:02.136)       0:04:03.249 *********
2026-04-07 04:16:53.616667 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 04:16:53.616681 | orchestrator |
2026-04-07 04:16:53.616688 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-07 04:16:53.616694 | orchestrator | Tuesday 07 April 2026  04:16:21 +0000 (0:00:02.242)       0:04:05.492 *********
2026-04-07 04:16:53.616701 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:16:53.616707 | orchestrator |
2026-04-07 04:16:53.616714 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-07 04:16:53.616721 | orchestrator | Tuesday 07 April 2026  04:16:22 +0000 (0:00:01.202)       0:04:06.694 *********
2026-04-07 04:16:53.616728 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 04:16:53.616734 | orchestrator |
2026-04-07 04:16:53.616742 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-07 04:16:53.616749 | orchestrator | Tuesday 07 April 2026  04:16:24 +0000 (0:00:02.236)       0:04:08.931 *********
2026-04-07 04:16:53.616756 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 04:16:53.616763 | orchestrator |
2026-04-07 04:16:53.616771 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-07 04:16:53.616778 | orchestrator | Tuesday 07 April 2026  04:16:27 +0000 (0:00:02.560)       0:04:11.491 *********
2026-04-07 04:16:53.616786 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-07 04:16:53.616794 | orchestrator |
2026-04-07 04:16:53.616802 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-07 04:16:53.616809 | orchestrator | Tuesday 07 April 2026  04:16:28 +0000 (0:00:01.262)       0:04:12.753 *********
2026-04-07 04:16:53.616817 | orchestrator | ok:
[testbed-node-0 -> localhost] 2026-04-07 04:16:53.616825 | orchestrator | 2026-04-07 04:16:53.616833 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-07 04:16:53.616841 | orchestrator | Tuesday 07 April 2026 04:16:29 +0000 (0:00:01.355) 0:04:14.109 ********* 2026-04-07 04:16:53.616848 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-04-07 04:16:53.616856 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-04-07 04:16:53.616865 | orchestrator | } 2026-04-07 04:16:53.616873 | orchestrator | 2026-04-07 04:16:53.616882 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-07 04:16:53.616890 | orchestrator | Tuesday 07 April 2026 04:16:31 +0000 (0:00:01.260) 0:04:15.369 ********* 2026-04-07 04:16:53.616897 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:16:53.616905 | orchestrator | 2026-04-07 04:16:53.616912 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-07 04:16:53.616919 | orchestrator | Tuesday 07 April 2026 04:16:32 +0000 (0:00:01.213) 0:04:16.583 ********* 2026-04-07 04:16:53.616927 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-07 04:16:53.616934 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-07 04:16:53.616941 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-07 04:16:53.616948 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-07 04:16:53.616956 | orchestrator | 2026-04-07 04:16:53.616964 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-07 04:16:53.616971 | orchestrator | Tuesday 07 April 2026 04:16:38 +0000 (0:00:06.500) 0:04:23.084 ********* 2026-04-07 04:16:53.616979 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-04-07 04:16:53.616992 | orchestrator | 2026-04-07 04:16:53.617000 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-07 04:16:53.617008 | orchestrator | Tuesday 07 April 2026 04:16:41 +0000 (0:00:02.770) 0:04:25.854 ********* 2026-04-07 04:16:53.617015 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-07 04:16:53.617023 | orchestrator | 2026-04-07 04:16:53.617030 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-07 04:16:53.617037 | orchestrator | Tuesday 07 April 2026 04:16:44 +0000 (0:00:03.015) 0:04:28.870 ********* 2026-04-07 04:16:53.617045 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-07 04:16:53.617052 | orchestrator | 2026-04-07 04:16:53.617059 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-07 04:16:53.617072 | orchestrator | Tuesday 07 April 2026 04:16:48 +0000 (0:00:04.352) 0:04:33.223 ********* 2026-04-07 04:16:53.617080 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:16:53.617087 | orchestrator | 2026-04-07 04:16:53.617095 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-07 04:16:53.617102 | orchestrator | Tuesday 07 April 2026 04:16:50 +0000 (0:00:01.198) 0:04:34.422 ********* 2026-04-07 04:16:53.617109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-07 04:16:53.617117 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-07 04:16:53.617124 | orchestrator | 2026-04-07 04:16:53.617132 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-07 04:16:53.617140 | orchestrator | Tuesday 07 April 2026 04:16:53 +0000 (0:00:03.315) 0:04:37.737 ********* 2026-04-07 
04:16:53.617147 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:16:53.617163 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:17:25.172541 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:17:25.172622 | orchestrator | 2026-04-07 04:17:25.172630 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-07 04:17:25.172637 | orchestrator | Tuesday 07 April 2026 04:16:54 +0000 (0:00:01.521) 0:04:39.259 ********* 2026-04-07 04:17:25.172641 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:17:25.172647 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:17:25.172651 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:17:25.172655 | orchestrator | 2026-04-07 04:17:25.172660 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-07 04:17:25.172664 | orchestrator | 2026-04-07 04:17:25.172669 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-07 04:17:25.172673 | orchestrator | Tuesday 07 April 2026 04:16:57 +0000 (0:00:02.512) 0:04:41.772 ********* 2026-04-07 04:17:25.172677 | orchestrator | ok: [testbed-manager] 2026-04-07 04:17:25.172681 | orchestrator | 2026-04-07 04:17:25.172686 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-07 04:17:25.172690 | orchestrator | Tuesday 07 April 2026 04:16:58 +0000 (0:00:01.259) 0:04:43.031 ********* 2026-04-07 04:17:25.172695 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-07 04:17:25.172699 | orchestrator | 2026-04-07 04:17:25.172703 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-07 04:17:25.172708 | orchestrator | Tuesday 07 April 2026 04:17:00 +0000 (0:00:01.594) 0:04:44.626 ********* 2026-04-07 04:17:25.172712 | orchestrator | ok: [testbed-manager] 2026-04-07 04:17:25.172716 | 
orchestrator | 2026-04-07 04:17:25.172720 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-07 04:17:25.172724 | orchestrator | 2026-04-07 04:17:25.172728 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-07 04:17:25.172733 | orchestrator | Tuesday 07 April 2026 04:17:05 +0000 (0:00:05.643) 0:04:50.270 ********* 2026-04-07 04:17:25.172737 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:17:25.172741 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:17:25.172745 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:17:25.172749 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:17:25.172767 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:17:25.172771 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:17:25.172775 | orchestrator | 2026-04-07 04:17:25.172780 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-07 04:17:25.172784 | orchestrator | Tuesday 07 April 2026 04:17:07 +0000 (0:00:01.930) 0:04:52.200 ********* 2026-04-07 04:17:25.172788 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-07 04:17:25.172792 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-07 04:17:25.172796 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-07 04:17:25.172800 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-07 04:17:25.172804 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-07 04:17:25.172854 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-07 04:17:25.172859 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-04-07 04:17:25.172863 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-07 04:17:25.172867 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-07 04:17:25.172871 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-07 04:17:25.172875 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-07 04:17:25.172879 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-07 04:17:25.172884 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-07 04:17:25.172888 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-07 04:17:25.172892 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-07 04:17:25.172896 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-07 04:17:25.172900 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-07 04:17:25.172904 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-07 04:17:25.172909 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-07 04:17:25.172913 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-07 04:17:25.172917 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-07 04:17:25.172922 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-07 04:17:25.172926 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-07 
04:17:25.172930 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-07 04:17:25.172934 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-07 04:17:25.172938 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-07 04:17:25.172952 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-07 04:17:25.172956 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-07 04:17:25.172960 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-07 04:17:25.172965 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-07 04:17:25.172969 | orchestrator | 2026-04-07 04:17:25.172973 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-07 04:17:25.172984 | orchestrator | Tuesday 07 April 2026 04:17:20 +0000 (0:00:12.501) 0:05:04.702 ********* 2026-04-07 04:17:25.172988 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:17:25.172993 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:17:25.172997 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:17:25.173001 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:17:25.173005 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:17:25.173020 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:17:25.173025 | orchestrator | 2026-04-07 04:17:25.173029 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-07 04:17:25.173033 | orchestrator | Tuesday 07 April 2026 04:17:22 +0000 (0:00:01.925) 0:05:06.628 ********* 2026-04-07 04:17:25.173037 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:17:25.173041 | orchestrator | skipping: [testbed-node-4] 
2026-04-07 04:17:25.173045 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:17:25.173049 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:17:25.173053 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:17:25.173058 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:17:25.173062 | orchestrator | 2026-04-07 04:17:25.173066 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:17:25.173070 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 04:17:25.173076 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-07 04:17:25.173080 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-07 04:17:25.173084 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-07 04:17:25.173088 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 04:17:25.173093 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 04:17:25.173097 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-07 04:17:25.173101 | orchestrator | 2026-04-07 04:17:25.173106 | orchestrator | 2026-04-07 04:17:25.173111 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:17:25.173116 | orchestrator | Tuesday 07 April 2026 04:17:25 +0000 (0:00:02.887) 0:05:09.515 ********* 2026-04-07 04:17:25.173121 | orchestrator | =============================================================================== 2026-04-07 04:17:25.173125 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.60s 2026-04-07 04:17:25.173130 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.34s 2026-04-07 04:17:25.173135 | orchestrator | Manage labels ---------------------------------------------------------- 12.50s 2026-04-07 04:17:25.173140 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.17s 2026-04-07 04:17:25.173145 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 6.50s 2026-04-07 04:17:25.173150 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.64s 2026-04-07 04:17:25.173154 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.73s 2026-04-07 04:17:25.173159 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.51s 2026-04-07 04:17:25.173164 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.35s 2026-04-07 04:17:25.173175 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.69s 2026-04-07 04:17:25.173180 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.54s 2026-04-07 04:17:25.173185 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.44s 2026-04-07 04:17:25.173190 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.32s 2026-04-07 04:17:25.173194 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 3.20s 2026-04-07 04:17:25.173199 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.06s 2026-04-07 04:17:25.173204 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 3.02s 2026-04-07 04:17:25.173208 | orchestrator | Manage taints 
----------------------------------------------------------- 2.89s 2026-04-07 04:17:25.173213 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.80s 2026-04-07 04:17:25.173221 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.78s 2026-04-07 04:17:25.603653 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.77s 2026-04-07 04:17:25.872255 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-07 04:17:25.872352 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-07 04:17:25.880556 | orchestrator | + set -e 2026-04-07 04:17:25.880637 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-07 04:17:25.880648 | orchestrator | ++ export INTERACTIVE=false 2026-04-07 04:17:25.880659 | orchestrator | ++ INTERACTIVE=false 2026-04-07 04:17:25.880667 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-07 04:17:25.880676 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-07 04:17:25.880684 | orchestrator | + osism apply openstackclient 2026-04-07 04:17:37.498007 | orchestrator | 2026-04-07 04:17:37 | INFO  | Prepare task for execution of openstackclient. 2026-04-07 04:17:37.609001 | orchestrator | 2026-04-07 04:17:37 | INFO  | Task 277c1b4f-fef0-4a48-a81a-f63c78dee54e (openstackclient) was prepared for execution. 2026-04-07 04:17:37.609134 | orchestrator | 2026-04-07 04:17:37 | INFO  | It takes a moment until task 277c1b4f-fef0-4a48-a81a-f63c78dee54e (openstackclient) has been started and output is visible here. 
2026-04-07 04:18:07.040963 | orchestrator | 2026-04-07 04:18:07.041161 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-07 04:18:07.041184 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-07 04:18:07.041194 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-07 04:18:07.041213 | orchestrator | 2026-04-07 04:18:07.041222 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-07 04:18:07.041230 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-07 04:18:07.041239 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-07 04:18:07.041256 | orchestrator | Tuesday 07 April 2026 04:17:43 +0000 (0:00:01.736) 0:00:01.736 ********* 2026-04-07 04:18:07.041266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-07 04:18:07.041276 | orchestrator | 2026-04-07 04:18:07.041285 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-07 04:18:07.041293 | orchestrator | Tuesday 07 April 2026 04:17:44 +0000 (0:00:01.191) 0:00:02.928 ********* 2026-04-07 04:18:07.041302 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-07 04:18:07.041311 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-07 04:18:07.041319 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-07 04:18:07.041349 | orchestrator | 2026-04-07 04:18:07.041358 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-07 04:18:07.041366 | orchestrator | Tuesday 07 April 2026 04:17:46 +0000 (0:00:01.810) 0:00:04.738 ********* 2026-04-07 04:18:07.041375 | 
orchestrator | changed: [testbed-manager] 2026-04-07 04:18:07.041383 | orchestrator | 2026-04-07 04:18:07.041392 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-07 04:18:07.041400 | orchestrator | Tuesday 07 April 2026 04:17:48 +0000 (0:00:01.446) 0:00:06.185 ********* 2026-04-07 04:18:07.041408 | orchestrator | ok: [testbed-manager] 2026-04-07 04:18:07.041418 | orchestrator | 2026-04-07 04:18:07.041426 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-07 04:18:07.041434 | orchestrator | Tuesday 07 April 2026 04:17:49 +0000 (0:00:01.201) 0:00:07.386 ********* 2026-04-07 04:18:07.041443 | orchestrator | ok: [testbed-manager] 2026-04-07 04:18:07.041451 | orchestrator | 2026-04-07 04:18:07.041460 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-07 04:18:07.041468 | orchestrator | Tuesday 07 April 2026 04:17:50 +0000 (0:00:01.136) 0:00:08.522 ********* 2026-04-07 04:18:07.041476 | orchestrator | ok: [testbed-manager] 2026-04-07 04:18:07.041483 | orchestrator | 2026-04-07 04:18:07.041491 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-07 04:18:07.041498 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-07 04:18:07.041505 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-07 04:18:07.041522 | orchestrator | Tuesday 07 April 2026 04:17:51 +0000 (0:00:01.047) 0:00:09.570 ********* 2026-04-07 04:18:07.041529 | orchestrator | changed: [testbed-manager] 2026-04-07 04:18:07.041538 | orchestrator | 2026-04-07 04:18:07.041546 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-07 04:18:07.041554 | orchestrator | Tuesday 07 April 2026 04:18:03 +0000 (0:00:11.833) 0:00:21.403 ********* 2026-04-07 04:18:07.041562 
| orchestrator | changed: [testbed-manager] 2026-04-07 04:18:07.041569 | orchestrator | 2026-04-07 04:18:07.041577 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-07 04:18:07.041585 | orchestrator | Tuesday 07 April 2026 04:18:04 +0000 (0:00:01.189) 0:00:22.592 ********* 2026-04-07 04:18:07.041594 | orchestrator | changed: [testbed-manager] 2026-04-07 04:18:07.041602 | orchestrator | 2026-04-07 04:18:07.041611 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-07 04:18:07.041620 | orchestrator | Tuesday 07 April 2026 04:18:05 +0000 (0:00:00.780) 0:00:23.373 ********* 2026-04-07 04:18:07.041627 | orchestrator | ok: [testbed-manager] 2026-04-07 04:18:07.041632 | orchestrator | 2026-04-07 04:18:07.041638 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:18:07.041643 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-07 04:18:07.041649 | orchestrator | 2026-04-07 04:18:07.041654 | orchestrator | 2026-04-07 04:18:07.041659 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:18:07.041664 | orchestrator | Tuesday 07 April 2026 04:18:06 +0000 (0:00:01.311) 0:00:24.685 ********* 2026-04-07 04:18:07.041669 | orchestrator | =============================================================================== 2026-04-07 04:18:07.041673 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.83s 2026-04-07 04:18:07.041678 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.81s 2026-04-07 04:18:07.041683 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.45s 2026-04-07 04:18:07.041688 | orchestrator | osism.services.openstackclient : Copy bash completion script 
------------ 1.31s 2026-04-07 04:18:07.041693 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.20s 2026-04-07 04:18:07.041705 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.19s 2026-04-07 04:18:07.041725 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.19s 2026-04-07 04:18:07.041730 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.14s 2026-04-07 04:18:07.041735 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.05s 2026-04-07 04:18:07.041739 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.78s 2026-04-07 04:18:07.393866 | orchestrator | + osism apply -a upgrade common 2026-04-07 04:18:09.047509 | orchestrator | 2026-04-07 04:18:09 | INFO  | Prepare task for execution of common. 2026-04-07 04:18:09.129734 | orchestrator | 2026-04-07 04:18:09 | INFO  | Task 54b77134-1f3e-4f08-ad10-c602ae96fa37 (common) was prepared for execution. 2026-04-07 04:18:09.129825 | orchestrator | 2026-04-07 04:18:09 | INFO  | It takes a moment until task 54b77134-1f3e-4f08-ad10-c602ae96fa37 (common) has been started and output is visible here. 
2026-04-07 04:18:31.849785 | orchestrator | 2026-04-07 04:18:31.849890 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-07 04:18:31.849909 | orchestrator | 2026-04-07 04:18:31.849920 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-07 04:18:31.849930 | orchestrator | Tuesday 07 April 2026 04:18:16 +0000 (0:00:02.974) 0:00:02.974 ********* 2026-04-07 04:18:31.849941 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 04:18:31.849953 | orchestrator | 2026-04-07 04:18:31.849965 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-07 04:18:31.849975 | orchestrator | Tuesday 07 April 2026 04:18:21 +0000 (0:00:05.015) 0:00:07.990 ********* 2026-04-07 04:18:31.849986 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.849997 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850008 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850073 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850087 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850099 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850111 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850122 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850133 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850145 | 
orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850177 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850189 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-07 04:18:31.850200 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850211 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850226 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850262 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850272 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850282 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850292 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-07 04:18:31.850329 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850343 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-07 04:18:31.850356 | orchestrator | 2026-04-07 04:18:31.850369 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-07 04:18:31.850381 | orchestrator | Tuesday 07 April 2026 04:18:26 +0000 (0:00:04.850) 0:00:12.841 ********* 2026-04-07 04:18:31.850393 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 04:18:31.850405 | orchestrator | 2026-04-07 
04:18:31.850416 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-07 04:18:31.850426 | orchestrator | Tuesday 07 April 2026 04:18:29 +0000 (0:00:03.173) 0:00:16.015 ********* 2026-04-07 04:18:31.850441 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850492 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850504 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850515 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850531 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:31.850551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:31.850561 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:31.850572 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:31.850589 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:35.589866 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.589986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590072 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590116 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590129 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590141 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590152 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590182 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590194 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590447 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590485 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:35.590497 | orchestrator | 2026-04-07 04:18:35.590511 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-07 04:18:35.590524 | orchestrator | Tuesday 07 April 2026 04:18:34 +0000 (0:00:05.460) 0:00:21.475 ********* 2026-04-07 04:18:35.590537 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:35.590565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:35.590578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:35.590590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:35.590624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:36.717907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:36.718097 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:18:36.718104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:36.718110 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:18:36.718115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718121 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:18:36.718127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:36.718166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718182 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:18:36.718188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718193 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:18:36.718198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:36.718203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:36.718209 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:18:36.718218 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494436 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:18:39.494460 | orchestrator | 2026-04-07 04:18:39.494475 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-07 04:18:39.494489 | orchestrator | Tuesday 07 April 2026 04:18:38 +0000 (0:00:03.163) 0:00:24.639 ********* 2026-04-07 04:18:39.494503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494659 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:18:39.494673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:39.494802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:39.494832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677494 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:18:53.677584 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:18:53.677598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677623 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:18:53.677631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677647 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:18:53.677654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:18:53.677671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677697 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:18:53.677704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:18:53.677718 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:18:53.677725 | orchestrator | 2026-04-07 04:18:53.677745 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-07 04:18:53.677753 | orchestrator | Tuesday 07 April 2026 04:18:41 +0000 (0:00:03.838) 0:00:28.478 ********* 2026-04-07 04:18:53.677760 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:18:53.677767 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:18:53.677774 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:18:53.677781 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:18:53.677788 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:18:53.677795 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:18:53.677802 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:18:53.677808 | orchestrator | 2026-04-07 04:18:53.677814 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-07 04:18:53.677820 | orchestrator | Tuesday 07 April 2026 04:18:44 +0000 (0:00:02.165) 0:00:30.644 ********* 2026-04-07 04:18:53.677827 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:18:53.677833 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:18:53.677839 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:18:53.677845 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:18:53.677851 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:18:53.677858 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:18:53.677865 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:18:53.677871 | orchestrator | 2026-04-07 04:18:53.677878 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-07 04:18:53.677884 | orchestrator | Tuesday 07 April 2026 
04:18:46 +0000 (0:00:02.292) 0:00:32.936 ********* 2026-04-07 04:18:53.677890 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:18:53.677897 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:18:53.677903 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:18:53.677911 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:18:53.677918 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:18:53.677924 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:18:53.677930 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:18:53.677936 | orchestrator | 2026-04-07 04:18:53.677942 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-07 04:18:53.677949 | orchestrator | Tuesday 07 April 2026 04:18:48 +0000 (0:00:02.496) 0:00:35.432 ********* 2026-04-07 04:18:53.677956 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:18:53.677962 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:18:53.677969 | orchestrator | changed: [testbed-manager] 2026-04-07 04:18:53.677976 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:18:53.677982 | orchestrator | changed: [testbed-node-3] 2026-04-07 04:18:53.677996 | orchestrator | changed: [testbed-node-4] 2026-04-07 04:18:53.678003 | orchestrator | changed: [testbed-node-5] 2026-04-07 04:18:53.678010 | orchestrator | 2026-04-07 04:18:53.678073 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-07 04:18:53.678087 | orchestrator | Tuesday 07 April 2026 04:18:51 +0000 (0:00:03.027) 0:00:38.460 ********* 2026-04-07 04:18:53.678095 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:53.678104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:53.678109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:53.678114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:53.678136 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:55.675018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:55.675097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:18:55.675121 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675188 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675211 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:18:55.675229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:18.335238 | orchestrator | 2026-04-07 04:19:18.335357 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-07 04:19:18.335377 | orchestrator | Tuesday 07 April 2026 04:18:57 +0000 (0:00:05.187) 
0:00:43.647 ********* 2026-04-07 04:19:18.335389 | orchestrator | [WARNING]: Skipped 2026-04-07 04:19:18.335402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-07 04:19:18.335414 | orchestrator | to this access issue: 2026-04-07 04:19:18.335426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-07 04:19:18.335438 | orchestrator | directory 2026-04-07 04:19:18.335449 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 04:19:18.335461 | orchestrator | 2026-04-07 04:19:18.335473 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-07 04:19:18.335484 | orchestrator | Tuesday 07 April 2026 04:18:59 +0000 (0:00:02.696) 0:00:46.343 ********* 2026-04-07 04:19:18.335546 | orchestrator | [WARNING]: Skipped 2026-04-07 04:19:18.335561 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-07 04:19:18.335573 | orchestrator | to this access issue: 2026-04-07 04:19:18.335584 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-07 04:19:18.335595 | orchestrator | directory 2026-04-07 04:19:18.335608 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 04:19:18.335628 | orchestrator | 2026-04-07 04:19:18.335646 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-07 04:19:18.335665 | orchestrator | Tuesday 07 April 2026 04:19:01 +0000 (0:00:02.133) 0:00:48.477 ********* 2026-04-07 04:19:18.335684 | orchestrator | [WARNING]: Skipped 2026-04-07 04:19:18.335705 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-07 04:19:18.335727 | orchestrator | to this access issue: 2026-04-07 04:19:18.335749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-07 
04:19:18.335769 | orchestrator | directory 2026-04-07 04:19:18.335782 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 04:19:18.335795 | orchestrator | 2026-04-07 04:19:18.335807 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-07 04:19:18.335821 | orchestrator | Tuesday 07 April 2026 04:19:04 +0000 (0:00:02.308) 0:00:50.786 ********* 2026-04-07 04:19:18.335834 | orchestrator | [WARNING]: Skipped 2026-04-07 04:19:18.335847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-07 04:19:18.335859 | orchestrator | to this access issue: 2026-04-07 04:19:18.335872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-07 04:19:18.335884 | orchestrator | directory 2026-04-07 04:19:18.335898 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-07 04:19:18.335910 | orchestrator | 2026-04-07 04:19:18.335923 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-07 04:19:18.335936 | orchestrator | Tuesday 07 April 2026 04:19:06 +0000 (0:00:02.103) 0:00:52.889 ********* 2026-04-07 04:19:18.335949 | orchestrator | changed: [testbed-manager] 2026-04-07 04:19:18.335962 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:19:18.335974 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:19:18.335988 | orchestrator | changed: [testbed-node-3] 2026-04-07 04:19:18.336000 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:19:18.336019 | orchestrator | changed: [testbed-node-4] 2026-04-07 04:19:18.336038 | orchestrator | changed: [testbed-node-5] 2026-04-07 04:19:18.336092 | orchestrator | 2026-04-07 04:19:18.336113 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-07 04:19:18.336132 | orchestrator | Tuesday 07 April 2026 04:19:10 +0000 (0:00:04.136) 0:00:57.026 ********* 2026-04-07 
04:19:18.336151 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336233 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336253 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336270 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336287 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336304 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336320 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-07 04:19:18.336337 | orchestrator | 2026-04-07 04:19:18.336355 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-07 04:19:18.336374 | orchestrator | Tuesday 07 April 2026 04:19:13 +0000 (0:00:03.383) 0:01:00.410 ********* 2026-04-07 04:19:18.336393 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:19:18.336412 | orchestrator | ok: [testbed-manager] 2026-04-07 04:19:18.336430 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:19:18.336448 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:19:18.336465 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:19:18.336483 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:19:18.336529 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:19:18.336547 | orchestrator | 2026-04-07 04:19:18.336565 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-07 04:19:18.336583 | orchestrator | Tuesday 07 April 2026 04:19:17 +0000 (0:00:03.654) 0:01:04.064 ********* 2026-04-07 
04:19:18.336665 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:18.336693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:18.336713 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:18.336732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:18.336767 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:18.336787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:18.336806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:18.336852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:26.195438 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:26.195622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:26.195644 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195688 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:26.195699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:26.195709 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195734 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 
04:19:26.195776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:26.195783 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195796 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:26.195803 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:26.195810 | orchestrator |
2026-04-07 04:19:26.195817 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-07 04:19:26.195825 | orchestrator | Tuesday 07 April 2026 04:19:20 +0000 (0:00:03.034) 0:01:07.099 *********
2026-04-07 04:19:26.195831 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195838 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195844 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195850 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195860 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195873 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195892 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-07 04:19:26.195902 | orchestrator |
2026-04-07 04:19:26.195912 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-07 04:19:26.195922 | orchestrator | Tuesday 07 April 2026 04:19:24 +0000 (0:00:03.542) 0:01:10.642 *********
2026-04-07 04:19:26.195931 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-07 04:19:26.195939 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-07 04:19:26.195953 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-07 04:19:26.195962 | orchestrator | ok: [testbed-node-2] =>
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 04:19:26.195970 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 04:19:26.195980 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 04:19:26.195990 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-07 04:19:26.195999 | orchestrator | 2026-04-07 04:19:26.196018 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-07 04:19:29.670268 | orchestrator | Tuesday 07 April 2026 04:19:27 +0000 (0:00:03.511) 0:01:14.154 ********* 2026-04-07 04:19:29.670438 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-07 04:19:29.670691 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:29.670788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700447 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700526 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:19:34.700558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:34.700562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:34.700566 | orchestrator |
2026-04-07 04:19:34.700571 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-07 04:19:34.700576 | orchestrator | Tuesday 07 April 2026 04:19:31 +0000 (0:00:04.454) 0:01:18.608 *********
2026-04-07 04:19:34.700581 | orchestrator | changed: [testbed-manager] => {
2026-04-07 04:19:34.700633 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700637 | orchestrator | }
2026-04-07 04:19:34.700641 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:19:34.700645 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700649 | orchestrator | }
2026-04-07 04:19:34.700653 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:19:34.700674 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700679 | orchestrator | }
2026-04-07 04:19:34.700682 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:19:34.700686 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700690 | orchestrator | }
2026-04-07 04:19:34.700694 | orchestrator | changed: [testbed-node-3] => {
2026-04-07 04:19:34.700698 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700701 | orchestrator | }
2026-04-07 04:19:34.700705 | orchestrator | changed: [testbed-node-4] => {
2026-04-07 04:19:34.700709 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700713 | orchestrator | }
2026-04-07 04:19:34.700716 | orchestrator | changed: [testbed-node-5] => {
2026-04-07 04:19:34.700720 | orchestrator |     "msg": "Notifying handlers"
2026-04-07 04:19:34.700724 | orchestrator | }
2026-04-07 04:19:34.700728 | orchestrator |
2026-04-07 04:19:34.700732 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:19:34.700746 | orchestrator | Tuesday 07 April 2026 04:19:34 +0000 (0:00:02.289) 0:01:20.898 *********
2026-04-07 04:19:34.700752 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 04:19:34.700758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:34.700762 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:34.700767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 04:19:34.700771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:34.700775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE':
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:34.700783 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:19:34.700790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:19:34.700800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-07 04:19:40.252182 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:19:40.252194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:19:40.252203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252216 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:19:40.252221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:19:40.252254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252266 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:19:40.252271 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:19:40.252289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-07 04:19:40.252295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:19:40.252306 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:19:40.252311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-07 04:19:40.252321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:40.252327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:19:40.252332 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:19:40.252338 | orchestrator |
2026-04-07 04:19:40.252347 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252354 | orchestrator | Tuesday 07 April 2026 04:19:37 +0000 (0:00:03.502) 0:01:24.401 *********
2026-04-07 04:19:40.252359 | orchestrator |
2026-04-07 04:19:40.252365 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252370 | orchestrator | Tuesday 07 April 2026 04:19:38 +0000 (0:00:00.482) 0:01:24.883 *********
2026-04-07 04:19:40.252375 | orchestrator |
2026-04-07 04:19:40.252380 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252395 | orchestrator | Tuesday 07 April 2026 04:19:38 +0000 (0:00:00.467) 0:01:25.351 *********
2026-04-07 04:19:40.252401 | orchestrator |
2026-04-07 04:19:40.252406 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252411 | orchestrator | Tuesday 07 April 2026 04:19:39 +0000 (0:00:00.467) 0:01:25.818 *********
2026-04-07 04:19:40.252416 | orchestrator |
2026-04-07 04:19:40.252421 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252428 | orchestrator | Tuesday 07 April 2026 04:19:39 +0000 (0:00:00.500) 0:01:26.318 *********
2026-04-07 04:19:40.252436 | orchestrator |
2026-04-07 04:19:40.252444 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:19:40.252452 | orchestrator | Tuesday 07 April 2026 04:19:40 +0000 (0:00:00.449) 0:01:26.767 *********
2026-04-07 04:19:40.252460 | orchestrator |
2026-04-07 04:19:40.252473 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-07 04:21:18.408602 | orchestrator | Tuesday 07 April 2026 04:19:40 +0000 (0:00:00.497) 0:01:27.265 *********
2026-04-07 04:21:18.408697 | orchestrator |
2026-04-07 04:21:18.408710 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-07 04:21:18.408719 | orchestrator | Tuesday 07 April 2026 04:19:41 +0000 (0:00:00.899) 0:01:28.164 *********
2026-04-07 04:21:18.408727 | orchestrator | changed: [testbed-manager]
2026-04-07 04:21:18.408736 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:21:18.408744 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:21:18.408751 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:21:18.408759 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:21:18.408766 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:21:18.408773 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:21:18.408780 | orchestrator |
2026-04-07 04:21:18.408788 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-07 04:21:18.408796 | orchestrator | Tuesday 07 April 2026 04:20:23 +0000 (0:00:41.743) 0:02:09.907 *********
2026-04-07 04:21:18.408803 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:21:18.408810 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:21:18.408838 | orchestrator | changed: [testbed-manager]
2026-04-07 04:21:18.408846 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:21:18.408853 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:21:18.408860 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:21:18.408867 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:21:18.408874 | orchestrator |
2026-04-07 04:21:18.408882 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-07 04:21:18.408889 | orchestrator | Tuesday 07 April 2026 04:21:02 +0000 (0:00:38.893) 0:02:48.801 *********
2026-04-07 04:21:18.408897 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:21:18.408904 | orchestrator | ok: [testbed-manager]
2026-04-07 04:21:18.408911 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:21:18.408918 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:21:18.408925 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:21:18.408932 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:21:18.408940 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:21:18.408947 | orchestrator |
2026-04-07 04:21:18.408954 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-07 04:21:18.408961 | orchestrator | Tuesday 07 April 2026 04:21:05 +0000 (0:00:02.994) 0:02:51.796 *********
2026-04-07 04:21:18.408968 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:21:18.408975 | orchestrator | changed: [testbed-manager]
2026-04-07 04:21:18.408983 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:21:18.408990 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:21:18.408997 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:21:18.409004 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:21:18.409011 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:21:18.409018 | orchestrator |
2026-04-07 04:21:18.409026 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 04:21:18.409037 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409051 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409063 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409075 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409087 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409182 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409197 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:21:18.409209 | orchestrator |
2026-04-07 04:21:18.409223 | orchestrator |
2026-04-07 04:21:18.409237 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:21:18.409266 | orchestrator | Tuesday 07 April 2026 04:21:17 +0000 (0:00:12.670) 0:03:04.466 *********
2026-04-07 04:21:18.409278 | orchestrator | ===============================================================================
2026-04-07 04:21:18.409287 | orchestrator | common : Restart fluentd container ------------------------------------- 41.74s 2026-04-07 04:21:18.409295 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.89s 2026-04-07 04:21:18.409304 | orchestrator | common : Restart cron container ---------------------------------------- 12.67s 2026-04-07 04:21:18.409312 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.46s 2026-04-07 04:21:18.409320 | orchestrator | common : Copying over config.json files for services -------------------- 5.19s 2026-04-07 04:21:18.409338 | orchestrator | common : include_tasks -------------------------------------------------- 5.02s 2026-04-07 04:21:18.409346 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.85s 2026-04-07 04:21:18.409354 | orchestrator | service-check-containers : common | Check containers -------------------- 4.45s 2026-04-07 04:21:18.409363 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.14s 2026-04-07 04:21:18.409371 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.84s 2026-04-07 04:21:18.409380 | orchestrator | common : Flush handlers ------------------------------------------------- 3.76s 2026-04-07 04:21:18.409403 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.65s 2026-04-07 04:21:18.409412 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.54s 2026-04-07 04:21:18.409420 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.51s 2026-04-07 04:21:18.409429 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.50s 2026-04-07 04:21:18.409437 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.39s 2026-04-07 
04:21:18.409446 | orchestrator | common : include_tasks -------------------------------------------------- 3.17s 2026-04-07 04:21:18.409454 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.16s 2026-04-07 04:21:18.409462 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.03s 2026-04-07 04:21:18.409469 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.03s 2026-04-07 04:21:18.638722 | orchestrator | + osism apply -a upgrade loadbalancer 2026-04-07 04:21:20.203864 | orchestrator | 2026-04-07 04:21:20 | INFO  | Prepare task for execution of loadbalancer. 2026-04-07 04:21:20.277905 | orchestrator | 2026-04-07 04:21:20 | INFO  | Task 8f55d740-4002-40c1-b588-1672d2a7f72d (loadbalancer) was prepared for execution. 2026-04-07 04:21:20.277972 | orchestrator | 2026-04-07 04:21:20 | INFO  | It takes a moment until task 8f55d740-4002-40c1-b588-1672d2a7f72d (loadbalancer) has been started and output is visible here. 
2026-04-07 04:21:55.257019 | orchestrator | 2026-04-07 04:21:55.257116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 04:21:55.257128 | orchestrator | 2026-04-07 04:21:55.257135 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 04:21:55.257142 | orchestrator | Tuesday 07 April 2026 04:21:25 +0000 (0:00:01.842) 0:00:01.842 ********* 2026-04-07 04:21:55.257150 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:21:55.257158 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:21:55.257165 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:21:55.257171 | orchestrator | 2026-04-07 04:21:55.257177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 04:21:55.257184 | orchestrator | Tuesday 07 April 2026 04:21:27 +0000 (0:00:02.126) 0:00:03.968 ********* 2026-04-07 04:21:55.257193 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-07 04:21:55.257200 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-07 04:21:55.257207 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-07 04:21:55.257213 | orchestrator | 2026-04-07 04:21:55.257220 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-07 04:21:55.257226 | orchestrator | 2026-04-07 04:21:55.257233 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-07 04:21:55.257240 | orchestrator | Tuesday 07 April 2026 04:21:31 +0000 (0:00:03.349) 0:00:07.318 ********* 2026-04-07 04:21:55.257290 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:21:55.257297 | orchestrator | 2026-04-07 04:21:55.257302 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-04-07 04:21:55.257307 | orchestrator | Tuesday 07 April 2026 04:21:33 +0000 (0:00:02.243) 0:00:09.562 ********* 2026-04-07 04:21:55.257327 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:21:55.257331 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:21:55.257335 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:21:55.257339 | orchestrator | 2026-04-07 04:21:55.257343 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-04-07 04:21:55.257347 | orchestrator | Tuesday 07 April 2026 04:21:36 +0000 (0:00:02.552) 0:00:12.115 ********* 2026-04-07 04:21:55.257351 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:21:55.257355 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:21:55.257359 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:21:55.257363 | orchestrator | 2026-04-07 04:21:55.257367 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-07 04:21:55.257371 | orchestrator | Tuesday 07 April 2026 04:21:38 +0000 (0:00:02.201) 0:00:14.316 ********* 2026-04-07 04:21:55.257375 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:21:55.257379 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:21:55.257383 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:21:55.257387 | orchestrator | 2026-04-07 04:21:55.257391 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-07 04:21:55.257396 | orchestrator | Tuesday 07 April 2026 04:21:40 +0000 (0:00:01.924) 0:00:16.240 ********* 2026-04-07 04:21:55.257408 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:21:55.257413 | orchestrator | 2026-04-07 04:21:55.257417 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-07 04:21:55.257422 | orchestrator | Tuesday 07 April 2026 04:21:42 +0000 (0:00:01.879) 0:00:18.120 ********* 2026-04-07 
04:21:55.257426 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:21:55.257430 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:21:55.257433 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:21:55.257437 | orchestrator | 2026-04-07 04:21:55.257441 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-07 04:21:55.257445 | orchestrator | Tuesday 07 April 2026 04:21:44 +0000 (0:00:01.939) 0:00:20.060 ********* 2026-04-07 04:21:55.257449 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257453 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257457 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257461 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257465 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257469 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-07 04:21:55.257473 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 04:21:55.257478 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 04:21:55.257482 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-07 04:21:55.257486 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-07 04:21:55.257490 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-07 04:21:55.257494 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-04-07 04:21:55.257498 | orchestrator | 2026-04-07 04:21:55.257502 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 04:21:55.257506 | orchestrator | Tuesday 07 April 2026 04:21:47 +0000 (0:00:03.696) 0:00:23.757 ********* 2026-04-07 04:21:55.257510 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-07 04:21:55.257514 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-07 04:21:55.257518 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-07 04:21:55.257530 | orchestrator | 2026-04-07 04:21:55.257534 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 04:21:55.257551 | orchestrator | Tuesday 07 April 2026 04:21:49 +0000 (0:00:01.904) 0:00:25.661 ********* 2026-04-07 04:21:55.257555 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-07 04:21:55.257559 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-07 04:21:55.257563 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-07 04:21:55.257568 | orchestrator | 2026-04-07 04:21:55.257573 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 04:21:55.257578 | orchestrator | Tuesday 07 April 2026 04:21:51 +0000 (0:00:02.307) 0:00:27.968 ********* 2026-04-07 04:21:55.257582 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-07 04:21:55.257587 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:21:55.257592 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-07 04:21:55.257613 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:21:55.257618 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-07 04:21:55.257628 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:21:55.257633 | orchestrator | 2026-04-07 04:21:55.257638 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-04-07 04:21:55.257642 | orchestrator | Tuesday 07 April 2026 04:21:54 +0000 (0:00:02.107) 0:00:30.076 ********* 2026-04-07 04:21:55.257648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 04:21:55.257659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:21:55.257665 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:21:55.257669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:21:55.257678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:21:55.257686 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:07.953811 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:07.953952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:07.953998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:07.954092 | orchestrator | 2026-04-07 04:22:07.954117 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-07 04:22:07.954138 | orchestrator | Tuesday 07 April 2026 04:21:56 +0000 (0:00:02.844) 0:00:32.920 ********* 2026-04-07 04:22:07.954204 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:22:07.954223 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:22:07.954240 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:22:07.954256 | orchestrator | 2026-04-07 04:22:07.954274 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-07 04:22:07.954292 | orchestrator | Tuesday 07 April 2026 04:21:59 +0000 (0:00:02.293) 0:00:35.214 ********* 2026-04-07 04:22:07.954338 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-04-07 04:22:07.954356 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-04-07 04:22:07.954374 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-04-07 04:22:07.954391 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-04-07 04:22:07.954441 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-04-07 04:22:07.954454 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-04-07 04:22:07.954466 | orchestrator | 2026-04-07 04:22:07.954478 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-07 04:22:07.954489 | orchestrator | Tuesday 07 April 2026 04:22:02 +0000 (0:00:03.023) 0:00:38.237 ********* 2026-04-07 04:22:07.954501 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:22:07.954512 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:22:07.954523 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:22:07.954534 | orchestrator | 2026-04-07 04:22:07.954546 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-07 04:22:07.954558 | orchestrator | Tuesday 07 April 2026 04:22:04 +0000 (0:00:02.343) 0:00:40.580 ********* 2026-04-07 04:22:07.954570 | orchestrator | ok: 
[testbed-node-0] 2026-04-07 04:22:07.954581 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:22:07.954591 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:22:07.954600 | orchestrator | 2026-04-07 04:22:07.954610 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-07 04:22:07.954620 | orchestrator | Tuesday 07 April 2026 04:22:07 +0000 (0:00:02.519) 0:00:43.100 ********* 2026-04-07 04:22:07.954631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 04:22:07.954663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:07.954675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:07.954696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:07.954707 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:07.954718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 04:22:07.954736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:07.954747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:07.954757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:07.954768 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
04:22:07.954785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 04:22:11.734110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:11.734228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:11.734264 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:11.734276 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:22:11.734289 | orchestrator | 2026-04-07 04:22:11.734300 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-07 04:22:11.734312 | orchestrator | Tuesday 07 April 2026 04:22:09 +0000 (0:00:02.069) 0:00:45.169 ********* 2026-04-07 04:22:11.734375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:11.734386 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:11.734397 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:11.734425 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:11.734444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:11.734456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:11.734473 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:11.734490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:11.734506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:11.734541 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:25.938964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:25.939093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d', '__omit_place_holder__fb65d9fbeeceef08e639e9a339e910c1f7bacf7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-07 04:22:25.939110 | orchestrator | 2026-04-07 04:22:25.939123 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-07 04:22:25.939135 | orchestrator | Tuesday 07 April 2026 04:22:12 +0000 (0:00:03.868) 0:00:49.038 ********* 2026-04-07 04:22:25.939146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:25.939258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:25.939269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:25.939279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:25.939289 | orchestrator | 2026-04-07 04:22:25.939299 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-07 04:22:25.939309 | orchestrator | Tuesday 07 April 2026 04:22:17 +0000 (0:00:04.586) 0:00:53.624 ********* 2026-04-07 04:22:25.939319 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 04:22:25.939330 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 04:22:25.939340 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-07 
04:22:25.939350 | orchestrator | 2026-04-07 04:22:25.939360 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-07 04:22:25.939411 | orchestrator | Tuesday 07 April 2026 04:22:20 +0000 (0:00:03.016) 0:00:56.640 ********* 2026-04-07 04:22:25.939429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 04:22:25.939447 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 04:22:25.939468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-07 04:22:25.939485 | orchestrator | 2026-04-07 04:22:25.939498 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-07 04:22:25.939509 | orchestrator | Tuesday 07 April 2026 04:22:25 +0000 (0:00:04.728) 0:01:01.368 ********* 2026-04-07 04:22:25.939521 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:25.939534 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:22:25.939552 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:22:48.272244 | orchestrator | 2026-04-07 04:22:48.272321 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-07 04:22:48.272329 | orchestrator | Tuesday 07 April 2026 04:22:27 +0000 (0:00:01.754) 0:01:03.123 ********* 2026-04-07 04:22:48.272334 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 04:22:48.272349 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 04:22:48.272353 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-07 04:22:48.272358 | 
orchestrator | 2026-04-07 04:22:48.272362 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-07 04:22:48.272366 | orchestrator | Tuesday 07 April 2026 04:22:30 +0000 (0:00:03.430) 0:01:06.553 ********* 2026-04-07 04:22:48.272370 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 04:22:48.272375 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 04:22:48.272380 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-07 04:22:48.272384 | orchestrator | 2026-04-07 04:22:48.272388 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-07 04:22:48.272392 | orchestrator | Tuesday 07 April 2026 04:22:33 +0000 (0:00:03.294) 0:01:09.847 ********* 2026-04-07 04:22:48.272396 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:22:48.272400 | orchestrator | 2026-04-07 04:22:48.272404 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-07 04:22:48.272408 | orchestrator | Tuesday 07 April 2026 04:22:35 +0000 (0:00:02.017) 0:01:11.865 ********* 2026-04-07 04:22:48.272413 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-04-07 04:22:48.272417 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-07 04:22:48.272421 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-07 04:22:48.272425 | orchestrator | 2026-04-07 04:22:48.272429 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-07 04:22:48.272434 | orchestrator | Tuesday 07 April 2026 04:22:38 +0000 (0:00:02.654) 0:01:14.519 ********* 2026-04-07 04:22:48.272439 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-07 04:22:48.272443 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-07 04:22:48.272447 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-07 04:22:48.272451 | orchestrator | 2026-04-07 04:22:48.272455 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-07 04:22:48.272494 | orchestrator | Tuesday 07 April 2026 04:22:41 +0000 (0:00:02.992) 0:01:17.512 ********* 2026-04-07 04:22:48.272511 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:48.272516 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:22:48.272519 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:22:48.272523 | orchestrator | 2026-04-07 04:22:48.272527 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-07 04:22:48.272531 | orchestrator | Tuesday 07 April 2026 04:22:42 +0000 (0:00:01.400) 0:01:18.913 ********* 2026-04-07 04:22:48.272534 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:48.272538 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:22:48.272542 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:22:48.272545 | orchestrator | 2026-04-07 04:22:48.272549 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-07 04:22:48.272553 | orchestrator | Tuesday 07 April 2026 04:22:44 +0000 (0:00:02.115) 0:01:21.028 ********* 2026-04-07 04:22:48.272559 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272566 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272583 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:22:48.272607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:48.272612 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:48.272619 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:22:51.924125 | orchestrator | 2026-04-07 04:22:51.924242 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-07 04:22:51.924263 | orchestrator | Tuesday 07 April 2026 04:22:49 +0000 (0:00:04.484) 0:01:25.513 ********* 2026-04-07 04:22:51.924304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 04:22:51.924325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:51.924368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:51.924386 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:51.924402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 04:22:51.924419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:51.924434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:51.924450 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:22:51.924522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 04:22:51.924541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:51.924567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:51.924584 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:22:51.924599 | orchestrator | 2026-04-07 04:22:51.924616 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-04-07 04:22:51.924633 | orchestrator | Tuesday 07 April 2026 04:22:51 +0000 (0:00:02.017) 0:01:27.530 ********* 2026-04-07 04:22:51.924650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 04:22:51.924666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:22:51.924682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:22:51.924699 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:22:51.924725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 04:23:03.348999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:23:03.349114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:23:03.349124 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:03.349133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 04:23:03.349140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:23:03.349409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 04:23:03.349417 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:23:03.349424 | orchestrator |
2026-04-07 04:23:03.349437 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-07 04:23:03.349444 | orchestrator | Tuesday 07 April 2026 04:22:53 +0000 (0:00:01.799) 0:01:29.330 *********
2026-04-07 04:23:03.349450 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-07 04:23:03.349458 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-07 04:23:03.349464 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-07 04:23:03.349470 | orchestrator |
2026-04-07 04:23:03.349477 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-07 04:23:03.349483 | orchestrator | Tuesday 07 April 2026 04:22:56 +0000 (0:00:02.983) 0:01:32.313 *********
2026-04-07 04:23:03.349489 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-07 04:23:03.349495 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-07 04:23:03.349501 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-07 04:23:03.349534 | orchestrator |
2026-04-07 04:23:03.349555 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-07 04:23:03.349563 | orchestrator | Tuesday 07 April 2026 04:22:58 +0000 (0:00:02.675) 0:01:34.989 *********
2026-04-07 04:23:03.349571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-07 04:23:03.349578 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-07 04:23:03.349585 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-07 04:23:03.349591 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-07 04:23:03.349597 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:23:03.349604 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-07 04:23:03.349610 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:23:03.349617 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-07 04:23:03.349624 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:23:03.349630 | orchestrator |
2026-04-07 04:23:03.349637 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-07 04:23:03.349644 | orchestrator | Tuesday 07 April 2026 04:23:01 +0000 (0:00:02.460) 0:01:37.450 *********
2026-04-07 04:23:03.349651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 04:23:03.349659 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:23:03.349666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:23:03.349676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-04-07 04:23:03.349694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:23:07.911172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:23:07.911304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:23:07.911331 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 04:23:07.911352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 04:23:07.911368 | orchestrator |
2026-04-07 04:23:07.911381 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-07 04:23:07.911393 | orchestrator | Tuesday 07 April 2026 04:23:05 +0000 (0:00:04.151) 0:01:41.601 *********
2026-04-07 04:23:07.911404 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:23:07.911415 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:23:07.911426 | orchestrator | }
2026-04-07 04:23:07.911436 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:23:07.911446 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:23:07.911455 | orchestrator | }
2026-04-07 04:23:07.911465 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:23:07.911474 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:23:07.911484 | orchestrator | }
2026-04-07 04:23:07.911494 | orchestrator |
2026-04-07 04:23:07.911504 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:23:07.911613 | orchestrator | Tuesday 07 April 2026 04:23:07 +0000 (0:00:01.709) 0:01:43.311 *********
2026-04-07 04:23:07.911628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-07 04:23:07.911657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-07 04:23:07.911669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:23:07.911682 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:07.911700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 04:23:07.911718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:23:07.911736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:23:07.911762 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:07.911787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 04:23:07.911804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:23:07.911829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-07 04:23:15.676017 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:23:15.676115 | orchestrator |
2026-04-07 04:23:15.676126 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-07 04:23:15.676135 | orchestrator | Tuesday 07 April 2026 04:23:09 +0000 (0:00:02.002) 0:01:45.313 *********
2026-04-07 04:23:15.676142 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:23:15.676149 | orchestrator |
2026-04-07 04:23:15.676156 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-07 04:23:15.676163 | orchestrator | Tuesday 07 April 2026 04:23:11 +0000 (0:00:02.118) 0:01:47.431 *********
2026-04-07 04:23:15.676173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:15.676183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 04:23:15.676222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:15.676231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 04:23:15.676250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:15.676258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:15.676266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 04:23:15.676278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 04:23:15.676289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:15.676296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:15.676307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 04:23:17.676087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})
2026-04-07 04:23:17.676185 | orchestrator |
2026-04-07 04:23:17.676201 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-07 04:23:17.676212 | orchestrator | Tuesday 07 April 2026 04:23:16 +0000 (0:00:05.406) 0:01:52.838 *********
2026-04-07 04:23:17.676225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:23:17.676260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-07 04:23:17.676287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-07 04:23:17.676298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-07 04:23:17.676309 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:23:17.676338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:23:17.676350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 04:23:17.676367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:17.676377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 04:23:17.676388 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:17.676398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:23:17.676409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-07 04:23:17.676502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.444155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.444320 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:23:32.444349 | orchestrator | 2026-04-07 04:23:32.444369 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-07 04:23:32.444388 | orchestrator | Tuesday 07 April 2026 04:23:18 +0000 (0:00:02.084) 0:01:54.922 ********* 2026-04-07 04:23:32.444407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-04-07 04:23:32.444428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:32.444446 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:32.444464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:32.444481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:32.444495 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:32.444520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:32.444530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:32.444541 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:23:32.444551 | orchestrator | 2026-04-07 04:23:32.444566 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-07 04:23:32.444581 | orchestrator | Tuesday 07 April 2026 04:23:21 +0000 (0:00:02.181) 0:01:57.104 ********* 2026-04-07 04:23:32.444598 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:23:32.444614 | 
orchestrator | ok: [testbed-node-1] 2026-04-07 04:23:32.444663 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:23:32.444682 | orchestrator | 2026-04-07 04:23:32.444700 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-07 04:23:32.444717 | orchestrator | Tuesday 07 April 2026 04:23:23 +0000 (0:00:02.337) 0:01:59.442 ********* 2026-04-07 04:23:32.444735 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:23:32.444752 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:23:32.444769 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:23:32.444788 | orchestrator | 2026-04-07 04:23:32.444806 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-07 04:23:32.444822 | orchestrator | Tuesday 07 April 2026 04:23:26 +0000 (0:00:03.200) 0:02:02.643 ********* 2026-04-07 04:23:32.444834 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:23:32.444845 | orchestrator | 2026-04-07 04:23:32.444858 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-07 04:23:32.444869 | orchestrator | Tuesday 07 April 2026 04:23:28 +0000 (0:00:01.731) 0:02:04.374 ********* 2026-04-07 04:23:32.444908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:32.444936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.444950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.444970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:32.444984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.444996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:32.445022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:23:34.657884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658136 | orchestrator | 2026-04-07 04:23:34.658159 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-07 04:23:34.658179 | orchestrator | Tuesday 07 April 2026 04:23:33 +0000 (0:00:05.395) 0:02:09.770 ********* 2026-04-07 04:23:34.658199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:23:34.658236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658259 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:34.658292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:23:34.658311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658340 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:34.658351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:23:34.658361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-04-07 04:23:34.658380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:23:51.948508 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:23:51.948632 | orchestrator | 2026-04-07 04:23:51.948651 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-07 04:23:51.948664 | orchestrator | Tuesday 07 April 2026 04:23:35 +0000 (0:00:02.083) 0:02:11.853 ********* 2026-04-07 04:23:51.948677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:51.948760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:51.948776 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:51.948787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 
04:23:51.948799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:51.948833 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:51.948846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:51.948857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:23:51.948868 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:23:51.948879 | orchestrator | 2026-04-07 04:23:51.948890 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-07 04:23:51.948901 | orchestrator | Tuesday 07 April 2026 04:23:37 +0000 (0:00:01.747) 0:02:13.600 ********* 2026-04-07 04:23:51.948912 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:23:51.948924 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:23:51.948935 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:23:51.948946 | orchestrator | 2026-04-07 04:23:51.948958 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-07 04:23:51.948971 | orchestrator | Tuesday 07 April 2026 04:23:39 +0000 (0:00:02.302) 0:02:15.903 ********* 2026-04-07 04:23:51.948984 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:23:51.948996 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:23:51.949008 | orchestrator | ok: 
[testbed-node-2] 2026-04-07 04:23:51.949032 | orchestrator | 2026-04-07 04:23:51.949048 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-07 04:23:51.949066 | orchestrator | Tuesday 07 April 2026 04:23:42 +0000 (0:00:03.107) 0:02:19.011 ********* 2026-04-07 04:23:51.949084 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:51.949116 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:23:51.949134 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:23:51.949151 | orchestrator | 2026-04-07 04:23:51.949169 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-07 04:23:51.949186 | orchestrator | Tuesday 07 April 2026 04:23:44 +0000 (0:00:01.707) 0:02:20.719 ********* 2026-04-07 04:23:51.949200 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:23:51.949216 | orchestrator | 2026-04-07 04:23:51.949232 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-07 04:23:51.949249 | orchestrator | Tuesday 07 April 2026 04:23:46 +0000 (0:00:01.587) 0:02:22.307 ********* 2026-04-07 04:23:51.949269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 04:23:51.949328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 04:23:51.949366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-07 04:23:51.949386 | orchestrator | 2026-04-07 04:23:51.949406 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-07 
04:23:51.949424 | orchestrator | Tuesday 07 April 2026 04:23:50 +0000 (0:00:04.157) 0:02:26.465 ********* 2026-04-07 04:23:51.949443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 04:23:51.949461 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:23:51.949480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 04:23:51.949499 | orchestrator | skipping: [testbed-node-1] 2026-04-07 
04:23:51.949526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-07 04:24:05.693490 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:05.693662 | orchestrator | 2026-04-07 04:24:05.693685 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-07 04:24:05.693699 | orchestrator | Tuesday 07 April 2026 04:23:53 +0000 (0:00:02.776) 0:02:29.241 ********* 2026-04-07 04:24:05.693730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693848 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:05.693859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693883 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:05.693894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-07 04:24:05.693917 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:05.693928 | orchestrator | 2026-04-07 04:24:05.693940 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-07 04:24:05.693951 | orchestrator | Tuesday 07 April 2026 04:23:56 +0000 (0:00:02.862) 0:02:32.104 ********* 2026-04-07 04:24:05.693962 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:05.693973 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:05.693984 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:05.693995 | orchestrator | 2026-04-07 04:24:05.694006 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-07 04:24:05.694134 | orchestrator | Tuesday 07 April 2026 04:23:57 +0000 (0:00:01.856) 0:02:33.960 ********* 2026-04-07 04:24:05.694159 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:05.694205 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:05.694223 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:05.694239 | orchestrator | 2026-04-07 04:24:05.694257 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-07 04:24:05.694274 | orchestrator | Tuesday 07 April 2026 04:24:00 +0000 (0:00:02.290) 0:02:36.251 ********* 2026-04-07 04:24:05.694291 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:24:05.694310 | orchestrator | 2026-04-07 04:24:05.694328 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-07 04:24:05.694346 | orchestrator | Tuesday 07 April 2026 04:24:01 +0000 (0:00:01.710) 0:02:37.961 ********* 2026-04-07 04:24:05.694399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:24:05.694424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:05.694446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:05.694468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:24:05.694504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:05.694589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:24:07.648635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648704 | orchestrator | 2026-04-07 04:24:07.648716 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-07 04:24:07.648728 | orchestrator | Tuesday 07 April 2026 04:24:07 +0000 
(0:00:05.211) 0:02:43.173 ********* 2026-04-07 04:24:07.648769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:24:07.648788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:07.648827 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:07.648854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:24:18.469246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469402 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469415 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:18.469428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:24:18.469456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-04-07 04:24:18.469516 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:18.469526 | orchestrator | 2026-04-07 04:24:18.469537 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-07 04:24:18.469549 | orchestrator | Tuesday 07 April 2026 04:24:09 +0000 (0:00:01.998) 0:02:45.172 ********* 2026-04-07 04:24:18.469560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:24:18.469572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:24:18.469583 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:18.469591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:24:18.469598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:24:18.469604 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:18.469609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-07 04:24:18.469616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:24:18.469621 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:18.469627 | orchestrator | 2026-04-07 04:24:18.469633 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-07 04:24:18.469644 | orchestrator | Tuesday 07 April 2026 04:24:11 +0000 (0:00:01.951) 0:02:47.124 ********* 2026-04-07 04:24:18.469651 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:24:18.469657 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:24:18.469663 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:24:18.469669 | orchestrator | 2026-04-07 04:24:18.469675 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-07 04:24:18.469680 | orchestrator | Tuesday 07 April 2026 04:24:13 +0000 (0:00:02.601) 0:02:49.726 ********* 2026-04-07 04:24:18.469686 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:24:18.469692 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:24:18.469697 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:24:18.469703 | orchestrator | 2026-04-07 04:24:18.469709 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-07 04:24:18.469715 | orchestrator | Tuesday 07 April 2026 04:24:16 +0000 (0:00:03.031) 0:02:52.757 ********* 2026-04-07 04:24:18.469720 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:18.469726 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:24:18.469732 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:24:18.469738 | orchestrator | 2026-04-07 04:24:18.469743 | orchestrator | TASK [include_role : cyborg] 
***************************************************
2026-04-07 04:24:18.469749 | orchestrator | Tuesday 07 April 2026 04:24:18 +0000 (0:00:01.469) 0:02:54.226 *********
2026-04-07 04:24:18.469755 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:24:18.469766 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:24:18.469817 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:24:25.717008 | orchestrator |
2026-04-07 04:24:25.717152 | orchestrator | TASK [include_role : designate] ************************************************
2026-04-07 04:24:25.717184 | orchestrator | Tuesday 07 April 2026 04:24:19 +0000 (0:00:01.513) 0:02:55.740 *********
2026-04-07 04:24:25.717205 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:24:25.717224 | orchestrator |
2026-04-07 04:24:25.717245 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-04-07 04:24:25.717265 | orchestrator | Tuesday 07 April 2026 04:24:21 +0000 (0:00:02.044) 0:02:57.785 *********
2026-04-07 04:24:25.717294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:25.717320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:25.717335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:25.717441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:25.717483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:25.717551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.738922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:27.739137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:27.739187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:27.739319 | orchestrator |
2026-04-07 04:24:27.739343 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-04-07 04:24:27.739364 | orchestrator | Tuesday 07 April 2026 04:24:27 +0000 (0:00:05.428) 0:03:03.214 *********
2026-04-07 04:24:27.739391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:27.739418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:27.739443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177326 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:24:28.177337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:28.177362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:28.177406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:28.177466 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:24:28.177487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:24:45.359522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-07 04:24:45.359636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-07 04:24:45.359690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-07 04:24:45.359704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-07 04:24:45.359713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-07 04:24:45.359723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-07 04:24:45.359733 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:24:45.359744 | orchestrator |
2026-04-07 04:24:45.359756 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-07 04:24:45.359768 | orchestrator | Tuesday 07 April 2026 04:24:29 +0000 (0:00:02.286) 0:03:05.500 *********
2026-04-07 04:24:45.359795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359820 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:24:45.359829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359935 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:24:45.359947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:24:45.359964 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:24:45.359973 | orchestrator |
2026-04-07 04:24:45.359982 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-07 04:24:45.359991 | orchestrator | Tuesday 07 April 2026 04:24:31 +0000 (0:00:02.363) 0:03:07.863 *********
2026-04-07 04:24:45.360000 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:24:45.360010 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:24:45.360018 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:24:45.360027 | orchestrator |
2026-04-07 04:24:45.360037 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-07 04:24:45.360053 | orchestrator | Tuesday 07 April 2026 04:24:34 +0000 (0:00:02.420) 0:03:10.284 *********
2026-04-07 04:24:45.360062 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:24:45.360070 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:24:45.360079 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:24:45.360088 | orchestrator |
2026-04-07 04:24:45.360097 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-07 04:24:45.360108 | orchestrator | Tuesday 07 April 2026 04:24:37 +0000 (0:00:03.073) 0:03:13.357 *********
2026-04-07 04:24:45.360118 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:24:45.360129 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:24:45.360140 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:24:45.360148 | orchestrator |
2026-04-07 04:24:45.360158 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-07 04:24:45.360168 | orchestrator | Tuesday 07 April 2026 04:24:39 +0000 (0:00:01.723) 0:03:15.081 *********
2026-04-07 04:24:45.360177 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:24:45.360187 | orchestrator |
2026-04-07 04:24:45.360199 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-07 04:24:45.360208 | orchestrator | Tuesday 07 April 2026 04:24:40 +0000 (0:00:01.699) 0:03:16.780 *********
2026-04-07 04:24:45.360235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-07 04:24:45.629563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-07 04:24:45.629693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 04:24:45.629786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 04:24:45.629810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-07 04:24:45.629840 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 04:24:50.248079 
| orchestrator | 2026-04-07 04:24:50.248157 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-07 04:24:50.248164 | orchestrator | Tuesday 07 April 2026 04:24:46 +0000 (0:00:06.187) 0:03:22.968 ********* 2026-04-07 04:24:50.248185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 04:24:50.248194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 04:24:50.248213 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:24:50.248232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 04:24:50.248238 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 04:24:50.248248 | orchestrator | skipping: [testbed-node-1] 
2026-04-07 04:24:50.248259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-07 04:25:09.798234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-07 04:25:09.798348 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:09.798361 | orchestrator | 2026-04-07 04:25:09.798368 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-04-07 04:25:09.798377 | orchestrator | Tuesday 07 April 2026 04:24:51 +0000 (0:00:04.518) 0:03:27.487 ********* 2026-04-07 04:25:09.798385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798419 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:09.798440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798447 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:09.798454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-07 04:25:09.798474 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:09.798480 | orchestrator | 2026-04-07 04:25:09.798498 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-07 04:25:09.798505 | orchestrator 
| Tuesday 07 April 2026 04:24:56 +0000 (0:00:05.301) 0:03:32.788 ********* 2026-04-07 04:25:09.798512 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:09.798519 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:09.798525 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:25:09.798531 | orchestrator | 2026-04-07 04:25:09.798538 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-07 04:25:09.798544 | orchestrator | Tuesday 07 April 2026 04:24:59 +0000 (0:00:02.632) 0:03:35.420 ********* 2026-04-07 04:25:09.798550 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:09.798556 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:09.798563 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:25:09.798569 | orchestrator | 2026-04-07 04:25:09.798575 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-07 04:25:09.798581 | orchestrator | Tuesday 07 April 2026 04:25:02 +0000 (0:00:03.172) 0:03:38.593 ********* 2026-04-07 04:25:09.798587 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:09.798594 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:09.798600 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:09.798606 | orchestrator | 2026-04-07 04:25:09.798612 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-07 04:25:09.798619 | orchestrator | Tuesday 07 April 2026 04:25:04 +0000 (0:00:01.487) 0:03:40.080 ********* 2026-04-07 04:25:09.798625 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:25:09.798631 | orchestrator | 2026-04-07 04:25:09.798638 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-07 04:25:09.798644 | orchestrator | Tuesday 07 April 2026 04:25:06 +0000 (0:00:02.325) 0:03:42.406 ********* 2026-04-07 04:25:09.798656 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:25:09.798668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:25:28.479083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:25:28.479203 | orchestrator | 2026-04-07 04:25:28.479222 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-07 04:25:28.479236 | orchestrator | Tuesday 07 April 2026 04:25:11 +0000 (0:00:05.525) 0:03:47.931 ********* 2026-04-07 04:25:28.479250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:25:28.479262 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:28.479274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:25:28.479286 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:28.479298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:25:28.479336 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:28.479349 | orchestrator | 2026-04-07 04:25:28.479361 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-07 04:25:28.479372 | orchestrator | Tuesday 07 April 2026 04:25:13 +0000 (0:00:01.539) 0:03:49.471 ********* 2026-04-07 04:25:28.479384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479412 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:28.479453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479477 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:28.479522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:25:28.479546 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:28.479557 | orchestrator | 2026-04-07 04:25:28.479570 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-07 04:25:28.479583 | orchestrator | Tuesday 07 April 2026 04:25:15 +0000 (0:00:02.008) 0:03:51.480 ********* 2026-04-07 04:25:28.479595 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:28.479608 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:28.479621 | orchestrator | ok: [testbed-node-2] 2026-04-07 
04:25:28.479633 | orchestrator | 2026-04-07 04:25:28.479646 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-07 04:25:28.479659 | orchestrator | Tuesday 07 April 2026 04:25:17 +0000 (0:00:02.523) 0:03:54.003 ********* 2026-04-07 04:25:28.479671 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:28.479683 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:28.479696 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:25:28.479708 | orchestrator | 2026-04-07 04:25:28.479721 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-07 04:25:28.479734 | orchestrator | Tuesday 07 April 2026 04:25:21 +0000 (0:00:03.056) 0:03:57.060 ********* 2026-04-07 04:25:28.479746 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:28.479760 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:28.479772 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:28.479785 | orchestrator | 2026-04-07 04:25:28.479797 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-07 04:25:28.479810 | orchestrator | Tuesday 07 April 2026 04:25:22 +0000 (0:00:01.464) 0:03:58.524 ********* 2026-04-07 04:25:28.479823 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:25:28.479836 | orchestrator | 2026-04-07 04:25:28.479848 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-07 04:25:28.479861 | orchestrator | Tuesday 07 April 2026 04:25:24 +0000 (0:00:02.065) 0:04:00.590 ********* 2026-04-07 04:25:28.479908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 04:25:30.884420 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 04:25:30.884560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-07 04:25:30.884576 | orchestrator | 2026-04-07 04:25:30.884584 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-07 04:25:30.884591 | orchestrator | Tuesday 07 April 2026 04:25:29 +0000 (0:00:05.383) 0:04:05.974 ********* 2026-04-07 04:25:30.884602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 04:25:30.884616 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:30.884646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 04:25:41.929335 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:41.929461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-07 04:25:41.929501 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:41.929513 | orchestrator | 2026-04-07 04:25:41.929524 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-04-07 04:25:41.929535 | orchestrator | Tuesday 07 April 2026 04:25:32 +0000 (0:00:02.460) 0:04:08.435 ********* 2026-04-07 04:25:41.929554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 04:25:41.929613 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:41.929639 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 04:25:41.929698 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:41.929713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-07 04:25:41.929743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-07 04:25:41.929753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-07 04:25:41.929763 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:41.929772 | orchestrator | 2026-04-07 04:25:41.929783 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-07 04:25:41.929793 | orchestrator | Tuesday 07 April 2026 04:25:34 +0000 (0:00:02.274) 0:04:10.709 ********* 2026-04-07 04:25:41.929803 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:41.929813 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:41.929823 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:25:41.929833 | 
orchestrator | 2026-04-07 04:25:41.929843 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-07 04:25:41.929852 | orchestrator | Tuesday 07 April 2026 04:25:36 +0000 (0:00:02.242) 0:04:12.952 ********* 2026-04-07 04:25:41.929863 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:25:41.929875 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:25:41.929885 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:25:41.929897 | orchestrator | 2026-04-07 04:25:41.929915 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-07 04:25:41.929926 | orchestrator | Tuesday 07 April 2026 04:25:39 +0000 (0:00:03.072) 0:04:16.025 ********* 2026-04-07 04:25:41.929938 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:41.929949 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:41.929959 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:41.929968 | orchestrator | 2026-04-07 04:25:41.929978 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-07 04:25:41.929988 | orchestrator | Tuesday 07 April 2026 04:25:41 +0000 (0:00:01.803) 0:04:17.828 ********* 2026-04-07 04:25:41.930003 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:50.711773 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:50.711905 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:50.711923 | orchestrator | 2026-04-07 04:25:50.711937 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-07 04:25:50.711950 | orchestrator | Tuesday 07 April 2026 04:25:43 +0000 (0:00:01.483) 0:04:19.312 ********* 2026-04-07 04:25:50.711962 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:25:50.711972 | orchestrator | 2026-04-07 04:25:50.711984 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-04-07 04:25:50.711995 | orchestrator | Tuesday 07 April 2026 04:25:45 +0000 (0:00:02.117) 0:04:21.429 ********* 2026-04-07 04:25:50.712013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-07 04:25:50.712048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:50.712117 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:50.712131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-07 04:25:50.712185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:50.712199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:50.712217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-07 04:25:50.712229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:50.712241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:50.712261 | orchestrator | 2026-04-07 04:25:50.712275 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-07 04:25:50.712289 | orchestrator | Tuesday 07 April 2026 04:25:50 +0000 (0:00:04.916) 0:04:26.346 ********* 2026-04-07 04:25:50.712311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-07 04:25:54.258329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:54.258544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:54.258565 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:54.258578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-07 04:25:54.258609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:54.258616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:54.258621 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:54.258644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-07 04:25:54.258654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-07 04:25:54.258661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-07 04:25:54.258672 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:54.258678 | orchestrator | 2026-04-07 04:25:54.258685 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-07 04:25:54.258692 | orchestrator | Tuesday 07 April 2026 04:25:52 +0000 (0:00:01.956) 0:04:28.302 ********* 2026-04-07 04:25:54.258699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258715 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:25:54.258721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258732 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:25:54.258738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-07 04:25:54.258749 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:25:54.258755 | orchestrator | 2026-04-07 04:25:54.258761 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-07 04:25:54.258770 | orchestrator | Tuesday 07 April 2026 04:25:54 +0000 (0:00:02.004) 0:04:30.307 ********* 2026-04-07 04:26:09.199012 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:26:09.199168 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:26:09.199183 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:26:09.199193 | orchestrator | 2026-04-07 04:26:09.199205 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-07 04:26:09.199216 | orchestrator | Tuesday 07 April 2026 04:25:56 +0000 (0:00:02.314) 0:04:32.621 ********* 2026-04-07 04:26:09.199225 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:26:09.199233 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:26:09.199242 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:26:09.199251 | orchestrator | 2026-04-07 04:26:09.199261 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-07 04:26:09.199270 | orchestrator | Tuesday 07 April 2026 04:25:59 +0000 (0:00:03.035) 0:04:35.657 ********* 2026-04-07 04:26:09.199279 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:26:09.199289 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:26:09.199298 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:26:09.199307 | orchestrator | 2026-04-07 04:26:09.199316 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-07 04:26:09.199325 | orchestrator | Tuesday 07 April 2026 04:26:01 +0000 (0:00:01.623) 0:04:37.280 ********* 2026-04-07 04:26:09.199358 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:26:09.199368 | orchestrator | 2026-04-07 04:26:09.199376 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-07 04:26:09.199398 | orchestrator | 
Tuesday 07 April 2026 04:26:03 +0000 (0:00:02.333) 0:04:39.614 ********* 2026-04-07 04:26:09.199413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:26:09.199427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:09.199438 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:26:09.199466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:09.199489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:26:09.199499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:09.199509 | orchestrator | 2026-04-07 04:26:09.199518 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-07 04:26:09.199528 | orchestrator | Tuesday 07 April 2026 04:26:08 +0000 (0:00:05.239) 0:04:44.853 ********* 2026-04-07 
04:26:09.199537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:26:09.199553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:25.801807 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:26:25.801917 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:26:25.801931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:25.801939 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:26:25.801982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:26:25.801992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:26:25.801999 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:26:25.802006 | orchestrator | 2026-04-07 04:26:25.802058 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-07 
04:26:25.802069 | orchestrator | Tuesday 07 April 2026 04:26:10 +0000 (0:00:02.106) 0:04:46.959 ********* 2026-04-07 04:26:25.802099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802114 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:26:25.802118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802131 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:26:25.802135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:26:25.802144 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:26:25.802148 | 
orchestrator | 2026-04-07 04:26:25.802153 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-07 04:26:25.802157 | orchestrator | Tuesday 07 April 2026 04:26:13 +0000 (0:00:02.439) 0:04:49.399 ********* 2026-04-07 04:26:25.802161 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:26:25.802166 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:26:25.802170 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:26:25.802217 | orchestrator | 2026-04-07 04:26:25.802222 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-07 04:26:25.802226 | orchestrator | Tuesday 07 April 2026 04:26:15 +0000 (0:00:02.341) 0:04:51.741 ********* 2026-04-07 04:26:25.802230 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:26:25.802235 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:26:25.802239 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:26:25.802243 | orchestrator | 2026-04-07 04:26:25.802247 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-07 04:26:25.802251 | orchestrator | Tuesday 07 April 2026 04:26:19 +0000 (0:00:04.121) 0:04:55.862 ********* 2026-04-07 04:26:25.802256 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:26:25.802260 | orchestrator | 2026-04-07 04:26:25.802265 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-07 04:26:25.802269 | orchestrator | Tuesday 07 April 2026 04:26:22 +0000 (0:00:02.311) 0:04:58.173 ********* 2026-04-07 04:26:25.802274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:26:25.802289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:26:27.907467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:26:27.907707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:27.907745 | orchestrator |
2026-04-07 04:26:27.907757 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-04-07 04:26:27.907769 | orchestrator | Tuesday 07 April 2026 04:26:27 +0000 (0:00:05.394) 0:05:03.568 *********
2026-04-07 04:26:27.907782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:26:27.907800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261164 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:26:30.261173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:26:30.261267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261302 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:26:30.261313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:26:30.261319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-07 04:26:30.261340 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:26:30.261346 | orchestrator |
2026-04-07 04:26:30.261352 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-07 04:26:30.261359 | orchestrator | Tuesday 07 April 2026 04:26:29 +0000 (0:00:02.105) 0:05:05.673 *********
2026-04-07 04:26:30.261366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:30.261375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:30.261382 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:26:30.261387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:30.261397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:47.251041 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:26:47.251140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:47.251154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:26:47.251193 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:26:47.251202 | orchestrator |
2026-04-07 04:26:47.251210 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-04-07 04:26:47.251220 | orchestrator | Tuesday 07 April 2026 04:26:31 +0000 (0:00:01.997) 0:05:07.670 *********
2026-04-07 04:26:47.251227 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:26:47.251241 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:26:47.251251 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:26:47.251296 | orchestrator |
2026-04-07 04:26:47.251302 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-04-07 04:26:47.251324 | orchestrator | Tuesday 07 April 2026 04:26:34 +0000 (0:00:02.401) 0:05:10.071 *********
2026-04-07 04:26:47.251329 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:26:47.251333 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:26:47.251337 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:26:47.251342 | orchestrator |
2026-04-07 04:26:47.251346 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-04-07 04:26:47.251350 | orchestrator | Tuesday 07 April 2026 04:26:37 +0000 (0:00:03.066) 0:05:13.138 *********
2026-04-07 04:26:47.251355 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:26:47.251359 | orchestrator |
2026-04-07 04:26:47.251364 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-04-07 04:26:47.251368 | orchestrator | Tuesday 07 April 2026 04:26:39 +0000 (0:00:02.731) 0:05:15.870 *********
2026-04-07 04:26:47.251372 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:26:47.251377 | orchestrator |
2026-04-07 04:26:47.251381 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-04-07 04:26:47.251385 | orchestrator | Tuesday 07 April 2026 04:26:44 +0000 (0:00:04.616) 0:05:20.487 *********
2026-04-07 04:26:47.251394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:47.251437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:26:47.251448 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:26:47.251455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:47.251469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:26:47.251477 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:26:47.251495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:51.340713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:26:51.340803 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:26:51.340814 | orchestrator |
2026-04-07 04:26:51.340823 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-04-07 04:26:51.340831 | orchestrator | Tuesday 07 April 2026 04:26:48 +0000 (0:00:03.982) 0:05:24.470 *********
2026-04-07 04:26:51.340841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:51.340850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:26:51.340858 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:26:51.340895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:51.340920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:26:51.340927 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:26:51.340935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:26:51.340957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-07 04:27:10.005021 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:27:10.005172 | orchestrator |
2026-04-07 04:27:10.005203 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-07 04:27:10.005226 | orchestrator | Tuesday 07 April 2026 04:26:52 +0000 (0:00:04.152) 0:05:28.622 *********
2026-04-07 04:27:10.005249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005298 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:27:10.005319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005389 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:27:10.005408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-07 04:27:10.005506 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:27:10.005528 | orchestrator |
2026-04-07 04:27:10.005550 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-07 04:27:10.005573 | orchestrator | Tuesday 07 April 2026 04:26:56 +0000 (0:00:04.410) 0:05:33.033 *********
2026-04-07 04:27:10.005594 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:27:10.005645 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:27:10.005666 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:27:10.005684 | orchestrator |
2026-04-07 04:27:10.005702 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-07 04:27:10.005721 | orchestrator | Tuesday 07 April 2026 04:27:00 +0000 (0:00:03.286) 0:05:36.320 *********
2026-04-07 04:27:10.005739 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:27:10.005757 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:27:10.005777 | orchestrator | skipping: [testbed-node-2]
2026-04-07 
04:27:10.005797 | orchestrator | 2026-04-07 04:27:10.005817 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-07 04:27:10.005836 | orchestrator | Tuesday 07 April 2026 04:27:03 +0000 (0:00:03.081) 0:05:39.402 ********* 2026-04-07 04:27:10.005852 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:10.005869 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:10.005884 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:10.005900 | orchestrator | 2026-04-07 04:27:10.005916 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-07 04:27:10.005935 | orchestrator | Tuesday 07 April 2026 04:27:04 +0000 (0:00:01.464) 0:05:40.866 ********* 2026-04-07 04:27:10.005954 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:27:10.005971 | orchestrator | 2026-04-07 04:27:10.005987 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-07 04:27:10.006004 | orchestrator | Tuesday 07 April 2026 04:27:06 +0000 (0:00:02.084) 0:05:42.951 ********* 2026-04-07 04:27:10.006158 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 
04:27:10.006184 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 04:27:10.006227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-07 04:27:10.006246 | orchestrator | 2026-04-07 04:27:10.006264 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-07 04:27:10.006282 | orchestrator | Tuesday 07 April 2026 04:27:09 +0000 (0:00:02.977) 0:05:45.928 ********* 2026-04-07 04:27:10.006330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 04:27:26.271524 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:26.271634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 04:27:26.271650 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:26.271661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-07 04:27:26.271675 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:26.271691 | orchestrator | 2026-04-07 04:27:26.271713 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-07 04:27:26.271727 | orchestrator | Tuesday 07 April 2026 04:27:11 +0000 (0:00:01.570) 0:05:47.499 ********* 2026-04-07 04:27:26.271770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 04:27:26.271786 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:26.271799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 04:27:26.271812 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:26.271825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-07 04:27:26.271838 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 04:27:26.271850 | orchestrator | 2026-04-07 04:27:26.271864 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-07 04:27:26.271876 | orchestrator | Tuesday 07 April 2026 04:27:13 +0000 (0:00:01.792) 0:05:49.291 ********* 2026-04-07 04:27:26.271889 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:26.271902 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:26.271915 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:26.271927 | orchestrator | 2026-04-07 04:27:26.271941 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-07 04:27:26.271955 | orchestrator | Tuesday 07 April 2026 04:27:15 +0000 (0:00:01.874) 0:05:51.166 ********* 2026-04-07 04:27:26.271968 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:26.271976 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:26.271984 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:26.271992 | orchestrator | 2026-04-07 04:27:26.272013 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-07 04:27:26.272022 | orchestrator | Tuesday 07 April 2026 04:27:17 +0000 (0:00:02.426) 0:05:53.592 ********* 2026-04-07 04:27:26.272029 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:26.272041 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:26.272054 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:26.272073 | orchestrator | 2026-04-07 04:27:26.272087 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-07 04:27:26.272100 | orchestrator | Tuesday 07 April 2026 04:27:19 +0000 (0:00:01.512) 0:05:55.104 ********* 2026-04-07 04:27:26.272112 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:27:26.272124 | orchestrator | 2026-04-07 04:27:26.272138 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-07 04:27:26.272151 | orchestrator | Tuesday 07 April 2026 04:27:21 +0000 (0:00:02.416) 0:05:57.521 ********* 2026-04-07 04:27:26.272191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:27:26.272218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.272229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:26.272277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:26.272307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.435316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.435501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-04-07 04:27:26.435523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:26.435538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:26.435572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.435620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:27:26.435645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:26.435676 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.435696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:27:26.435722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.435741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.435775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:26.537067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:26.537156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.537186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:26.537197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:26.537243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:26.537256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.537266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:26.537280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.537291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.537307 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.537325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:26.648076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.648152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:26.648174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.648182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.648189 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:26.648217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:26.648233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:26.648239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:26.648244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.648252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:26.648263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:26.648269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:26.648278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:29.380522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:29.380614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.380644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:29.380670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:29.380678 | orchestrator | 2026-04-07 04:27:29.380686 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-07 04:27:29.380695 | orchestrator | Tuesday 07 April 2026 04:27:27 +0000 (0:00:06.425) 0:06:03.947 ********* 2026-04-07 04:27:29.380719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:27:29.380728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.380740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:29.380753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:29.380761 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.380769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:29.380783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:29.895154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:29.895311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:29.895353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.895368 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:29.895381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:29.895393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.895487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:29.895522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:29.895535 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:29.895550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:27:29.895563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:29.895584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:30.158524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:30.158629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.158641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:30.158650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:30.158660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:30.158682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:30.158697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.158703 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:30.158707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:30.158711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.158716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:27:30.158725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.342677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:30.342764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-07 04:27:30.342778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:30.342789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-07 04:27:30.342818 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:30.342850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.342862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:30.342871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:30.342880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-07 04:27:30.342889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:30.342898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:30.342914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-07 04:27:30.342930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-07 04:27:47.003511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-07 04:27:47.003682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-07 04:27:47.003704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-07 04:27:47.003713 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:47.003723 | orchestrator | 2026-04-07 04:27:47.003732 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-07 04:27:47.003757 | orchestrator | Tuesday 07 April 2026 04:27:31 +0000 (0:00:03.771) 0:06:07.718 ********* 2026-04-07 04:27:47.003766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003786 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:27:47.003794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003809 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:27:47.003820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:27:47.003851 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:27:47.003858 | orchestrator | 2026-04-07 04:27:47.003866 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-07 04:27:47.003874 | orchestrator | Tuesday 07 April 2026 04:27:34 +0000 
(0:00:03.045) 0:06:10.763 ********* 2026-04-07 04:27:47.003882 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:27:47.003890 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:27:47.003897 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:27:47.003904 | orchestrator | 2026-04-07 04:27:47.003912 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-07 04:27:47.003919 | orchestrator | Tuesday 07 April 2026 04:27:36 +0000 (0:00:02.284) 0:06:13.048 ********* 2026-04-07 04:27:47.003926 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:27:47.003933 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:27:47.003940 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:27:47.003948 | orchestrator | 2026-04-07 04:27:47.003955 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-07 04:27:47.003962 | orchestrator | Tuesday 07 April 2026 04:27:40 +0000 (0:00:03.028) 0:06:16.077 ********* 2026-04-07 04:27:47.003969 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:27:47.003976 | orchestrator | 2026-04-07 04:27:47.003984 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-07 04:27:47.003991 | orchestrator | Tuesday 07 April 2026 04:27:42 +0000 (0:00:02.473) 0:06:18.550 ********* 2026-04-07 04:27:47.003999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-07 04:27:47.004019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-07 04:27:47.004039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-07 04:28:05.639097 | orchestrator | 2026-04-07 04:28:05.639176 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-07 04:28:05.639183 | orchestrator | Tuesday 07 April 2026 04:27:48 +0000 (0:00:05.710) 0:06:24.260 ********* 2026-04-07 04:28:05.639190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-07 04:28:05.639197 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:28:05.639202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-07 04:28:05.639221 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:28:05.639226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-07 04:28:05.639230 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:28:05.639234 | orchestrator | 2026-04-07 04:28:05.639238 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-07 04:28:05.639242 | orchestrator | Tuesday 07 April 2026 04:27:50 +0000 (0:00:02.543) 0:06:26.804 ********* 2026-04-07 04:28:05.639257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639280 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:28:05.639284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639292 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:28:05.639295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-07 04:28:05.639308 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:28:05.639311 | orchestrator | 2026-04-07 04:28:05.639315 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-07 04:28:05.639319 | orchestrator | Tuesday 07 April 2026 04:27:52 +0000 (0:00:01.911) 0:06:28.716 ********* 2026-04-07 04:28:05.639323 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:28:05.639328 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:28:05.639331 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:28:05.639335 | orchestrator | 2026-04-07 04:28:05.639339 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-07 04:28:05.639342 | orchestrator | Tuesday 07 April 2026 04:27:55 +0000 (0:00:02.441) 0:06:31.157 ********* 2026-04-07 04:28:05.639346 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:28:05.639350 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:28:05.639354 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:28:05.639358 | orchestrator | 2026-04-07 04:28:05.639362 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-07 
04:28:05.639365 | orchestrator | Tuesday 07 April 2026 04:27:58 +0000 (0:00:03.321) 0:06:34.479 ********* 2026-04-07 04:28:05.639369 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:28:05.639373 | orchestrator | 2026-04-07 04:28:05.639377 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-07 04:28:05.639381 | orchestrator | Tuesday 07 April 2026 04:28:01 +0000 (0:00:02.794) 0:06:37.273 ********* 2026-04-07 04:28:05.639385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:05.639395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:09.190824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:09.190928 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:09.190942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.190951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.190988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:09.191014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.191022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.191030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:28:09.191037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.191049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:09.191062 | orchestrator | 2026-04-07 04:28:09.191070 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-07 04:28:09.191084 | orchestrator | Tuesday 07 April 2026 04:28:09 +0000 (0:00:07.966) 0:06:45.239 ********* 2026-04-07 04:28:10.490678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:10.490750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:10.490758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:28:10.490764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:10.490769 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:28:10.490797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:10.490815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:10.490820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-07 04:28:10.490824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:10.490828 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:28:10.490835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:10.490858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:28:33.761778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-07 04:28:33.761951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-07 04:28:33.761978 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:28:33.761997 | orchestrator | 2026-04-07 04:28:33.762013 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-07 04:28:33.762109 | orchestrator | Tuesday 07 April 2026 04:28:11 +0000 (0:00:02.556) 0:06:47.795 ********* 2026-04-07 04:28:33.762129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762232 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
04:28:33.762267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762377 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:28:33.762392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:28:33.762420 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:28:33.762434 | orchestrator | 2026-04-07 04:28:33.762449 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-07 04:28:33.762463 | orchestrator | Tuesday 07 April 2026 04:28:14 +0000 (0:00:02.601) 0:06:50.397 ********* 2026-04-07 04:28:33.762477 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:28:33.762491 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:28:33.762504 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:28:33.762517 | orchestrator | 2026-04-07 04:28:33.762531 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-07 04:28:33.762544 | orchestrator | Tuesday 07 April 2026 04:28:16 +0000 (0:00:02.503) 0:06:52.901 ********* 2026-04-07 04:28:33.762558 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:28:33.762570 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:28:33.762584 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:28:33.762596 | orchestrator | 2026-04-07 04:28:33.762610 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-07 04:28:33.762724 | orchestrator | Tuesday 07 April 2026 04:28:20 +0000 (0:00:03.625) 0:06:56.527 ********* 2026-04-07 04:28:33.762741 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:28:33.762754 | orchestrator | 2026-04-07 04:28:33.762780 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-04-07 04:28:33.762794 | orchestrator | Tuesday 07 April 2026 04:28:23 +0000 (0:00:02.614) 0:06:59.141 ********* 2026-04-07 04:28:33.762807 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-07 04:28:33.762823 | orchestrator | 2026-04-07 04:28:33.762836 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-07 04:28:33.762850 | orchestrator | Tuesday 07 April 2026 04:28:25 +0000 (0:00:02.753) 0:07:01.895 ********* 2026-04-07 04:28:33.762866 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 04:28:33.762892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 04:28:33.762937 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-07 04:28:33.762951 | orchestrator | 2026-04-07 04:28:33.762976 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-07 04:28:33.762992 | orchestrator | Tuesday 07 April 2026 04:28:32 +0000 (0:00:06.705) 0:07:08.600 ********* 2026-04-07 04:28:33.763006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:28:33.763032 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:02.203658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.203835 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:02.203846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.203869 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:02.203874 | orchestrator | 2026-04-07 04:29:02.203879 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-07 04:29:02.203885 | orchestrator | Tuesday 07 April 2026 04:28:35 +0000 (0:00:03.012) 0:07:11.612 ********* 2026-04-07 04:29:02.203890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203903 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:02.203907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203915 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:02.203919 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-07 04:29:02.203938 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:02.203942 | orchestrator | 2026-04-07 04:29:02.203946 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 04:29:02.203950 | orchestrator | Tuesday 07 April 2026 04:28:38 +0000 (0:00:03.216) 0:07:14.829 ********* 2026-04-07 04:29:02.203954 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:02.203959 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:02.203963 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:02.203967 | orchestrator | 2026-04-07 04:29:02.203971 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 04:29:02.203975 | orchestrator | Tuesday 07 April 2026 04:28:42 +0000 (0:00:03.886) 0:07:18.716 ********* 2026-04-07 04:29:02.203979 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:02.203983 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:02.203987 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:02.203991 | orchestrator | 2026-04-07 04:29:02.203995 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-07 04:29:02.203999 | orchestrator | Tuesday 07 April 2026 04:28:47 +0000 (0:00:04.555) 0:07:23.271 ********* 2026-04-07 04:29:02.204005 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-07 04:29:02.204010 | orchestrator | 2026-04-07 04:29:02.204014 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-07 04:29:02.204019 | orchestrator | Tuesday 07 April 2026 04:28:49 +0000 (0:00:01.936) 0:07:25.208 ********* 2026-04-07 04:29:02.204035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204044 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:02.204048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204053 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:02.204057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204061 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:02.204065 | orchestrator | 2026-04-07 04:29:02.204069 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-07 04:29:02.204074 | orchestrator | Tuesday 07 April 2026 04:28:52 +0000 (0:00:03.114) 0:07:28.323 ********* 2026-04-07 04:29:02.204078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204082 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:02.204089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204093 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:02.204097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-07 04:29:02.204101 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:02.204112 | orchestrator | 2026-04-07 04:29:02.204116 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-07 04:29:02.204120 | orchestrator | Tuesday 07 April 2026 04:28:55 +0000 (0:00:02.851) 0:07:31.175 ********* 2026-04-07 04:29:02.204128 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:02.204132 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:02.204136 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:02.204140 | orchestrator | 2026-04-07 04:29:02.204144 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 04:29:02.204148 | orchestrator | Tuesday 07 April 2026 04:28:58 +0000 (0:00:03.148) 0:07:34.323 ********* 2026-04-07 04:29:02.204152 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:02.204156 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:02.204160 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:02.204164 | orchestrator | 2026-04-07 04:29:02.204167 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 04:29:02.204171 | orchestrator | Tuesday 07 April 2026 04:29:02 +0000 (0:00:03.920) 0:07:38.244 ********* 2026-04-07 04:29:31.495328 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:31.495477 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:31.495503 | orchestrator | ok: [testbed-node-2] 2026-04-07 
04:29:31.495523 | orchestrator | 2026-04-07 04:29:31.495544 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-07 04:29:31.495564 | orchestrator | Tuesday 07 April 2026 04:29:06 +0000 (0:00:04.456) 0:07:42.701 ********* 2026-04-07 04:29:31.495584 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-07 04:29:31.495604 | orchestrator | 2026-04-07 04:29:31.495625 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-07 04:29:31.495645 | orchestrator | Tuesday 07 April 2026 04:29:08 +0000 (0:00:01.835) 0:07:44.536 ********* 2026-04-07 04:29:31.495670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.495693 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:31.495716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.495736 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:31.495756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.495843 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:31.495864 | orchestrator | 2026-04-07 04:29:31.495885 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-07 04:29:31.495906 | orchestrator | Tuesday 07 April 2026 04:29:11 +0000 (0:00:02.749) 0:07:47.286 ********* 2026-04-07 04:29:31.495948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.496000 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:31.496021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.496040 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:31.496088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-07 04:29:31.496106 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:31.496122 | orchestrator | 2026-04-07 04:29:31.496140 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-07 04:29:31.496159 | orchestrator | Tuesday 07 April 2026 04:29:13 +0000 (0:00:02.768) 0:07:50.054 ********* 2026-04-07 04:29:31.496179 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:31.496197 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:31.496215 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:31.496232 | orchestrator | 2026-04-07 04:29:31.496250 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-07 04:29:31.496268 | orchestrator | Tuesday 07 April 2026 04:29:16 +0000 (0:00:02.816) 0:07:52.870 ********* 2026-04-07 04:29:31.496286 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:31.496304 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:31.496321 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:31.496338 | orchestrator | 2026-04-07 04:29:31.496355 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-07 04:29:31.496373 | orchestrator | Tuesday 07 April 2026 04:29:20 +0000 (0:00:03.975) 0:07:56.846 ********* 2026-04-07 04:29:31.496392 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:31.496410 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:31.496426 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:31.496444 | orchestrator | 2026-04-07 04:29:31.496463 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-07 04:29:31.496479 | orchestrator | Tuesday 07 April 2026 04:29:25 +0000 (0:00:04.558) 0:08:01.405 ********* 2026-04-07 04:29:31.496496 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:29:31.496513 | orchestrator | 2026-04-07 04:29:31.496532 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-07 04:29:31.496549 | orchestrator | Tuesday 07 April 2026 04:29:27 +0000 (0:00:02.351) 0:08:03.757 ********* 2026-04-07 04:29:31.496568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 04:29:31.496617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 04:29:31.496640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:31.496676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:33.725319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:33.725411 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 04:29:33.725452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 04:29:33.725481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:33.725493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:33.725506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:33.725540 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-07 04:29:33.725557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 04:29:33.725579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:33.725592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:33.725600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:33.725608 | orchestrator | 2026-04-07 04:29:33.725617 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-07 04:29:33.725626 | orchestrator | Tuesday 07 April 2026 04:29:33 +0000 (0:00:05.589) 0:08:09.347 ********* 2026-04-07 04:29:33.725641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 04:29:34.005684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-04-07 04:29:34.005762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:34.005834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:34.005846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:34.005854 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:34.005864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 04:29:34.005874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 04:29:34.005900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:34.005998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:34.006078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:34.006089 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:34.006102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-07 04:29:34.006112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-07 04:29:34.006129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-07 04:29:51.989254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-07 04:29:51.989394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-07 04:29:51.989413 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:51.989427 | orchestrator | 2026-04-07 04:29:51.989440 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-07 04:29:51.989453 | orchestrator | Tuesday 07 April 2026 04:29:35 +0000 (0:00:01.922) 0:08:11.270 ********* 2026-04-07 04:29:51.989465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989492 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:51.989517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989543 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:51.989562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-07 04:29:51.989599 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:51.989616 | orchestrator | 2026-04-07 04:29:51.989634 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-07 04:29:51.989651 | orchestrator | Tuesday 07 April 2026 04:29:37 +0000 (0:00:01.917) 0:08:13.188 ********* 2026-04-07 04:29:51.989670 | orchestrator | ok: [testbed-node-0] 2026-04-07 
04:29:51.989690 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:51.989709 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:51.989729 | orchestrator | 2026-04-07 04:29:51.989747 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-07 04:29:51.989767 | orchestrator | Tuesday 07 April 2026 04:29:39 +0000 (0:00:02.695) 0:08:15.883 ********* 2026-04-07 04:29:51.989787 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:29:51.989803 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:29:51.989851 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:29:51.989864 | orchestrator | 2026-04-07 04:29:51.989877 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-07 04:29:51.989901 | orchestrator | Tuesday 07 April 2026 04:29:43 +0000 (0:00:03.250) 0:08:19.133 ********* 2026-04-07 04:29:51.989914 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:29:51.989929 | orchestrator | 2026-04-07 04:29:51.989948 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-07 04:29:51.989967 | orchestrator | Tuesday 07 April 2026 04:29:45 +0000 (0:00:02.396) 0:08:21.530 ********* 2026-04-07 04:29:51.990083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:29:51.990109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:29:51.990132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:29:51.990149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:29:51.990185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:29:55.686969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:29:55.687116 | orchestrator | 2026-04-07 04:29:55.687163 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-07 
04:29:55.687183 | orchestrator | Tuesday 07 April 2026 04:29:53 +0000 (0:00:07.851) 0:08:29.381 ********* 2026-04-07 04:29:55.687202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:29:55.687223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:29:55.687269 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:29:55.687308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:29:55.687326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:29:55.687337 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:29:55.687348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:29:55.687367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:29:55.687383 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:29:55.687400 | orchestrator | 2026-04-07 04:29:55.687446 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-07 04:29:55.687465 | orchestrator | Tuesday 07 April 2026 04:29:55 +0000 (0:00:01.906) 0:08:31.288 ********* 2026-04-07 04:29:55.687484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:29:55.687506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143520 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:30:07.143550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:30:07.143567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143623 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:30:07.143640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-07 04:30:07.143683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-07 04:30:07.143719 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:30:07.143736 | orchestrator | 2026-04-07 04:30:07.143754 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-07 04:30:07.143774 | orchestrator | Tuesday 07 April 2026 04:29:57 +0000 (0:00:02.279) 0:08:33.567 ********* 2026-04-07 04:30:07.143790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:30:07.143807 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:30:07.143817 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:30:07.143827 | orchestrator | 2026-04-07 04:30:07.143837 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-07 04:30:07.143876 | orchestrator | Tuesday 07 April 2026 04:29:59 +0000 (0:00:01.655) 0:08:35.223 ********* 2026-04-07 04:30:07.143896 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:30:07.143912 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:30:07.143927 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:30:07.143939 | orchestrator | 2026-04-07 04:30:07.143951 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-07 04:30:07.143963 | orchestrator | Tuesday 07 April 2026 04:30:01 +0000 (0:00:02.544) 0:08:37.768 ********* 2026-04-07 04:30:07.143974 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:30:07.143985 | orchestrator | 2026-04-07 04:30:07.143997 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-07 04:30:07.144009 | orchestrator | Tuesday 07 April 2026 04:30:04 +0000 (0:00:03.014) 0:08:40.782 ********* 2026-04-07 04:30:07.144047 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-07 04:30:07.144063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:07.144081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:07.144102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:07.144114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-07 04:30:07.144125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 04:30:07.144185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:07.144211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 
'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-07 04:30:09.678178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:09.678328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 04:30:09.678343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 04:30:09.678447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:30:09.678462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-07 04:30:09.678471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:09.678489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 04:30:09.678509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:30:11.844449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-07 04:30:11.844531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.844541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.844549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 04:30:11.844557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:30:11.844605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-07 04:30:11.844613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.844619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.844625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 04:30:11.844631 | orchestrator | 2026-04-07 04:30:11.844640 | orchestrator | TASK 
[haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-07 04:30:11.844651 | orchestrator | Tuesday 07 April 2026 04:30:11 +0000 (0:00:06.516) 0:08:47.298 ********* 2026-04-07 04:30:11.844665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-07 04:30:11.844692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:11.844711 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-07 04:30:11.943340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 04:30:11.943384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:11.943413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:30:11.943447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}}}})  2026-04-07 04:30:11.943473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-07 04:30:11.943515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:11.943541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:30:12.246177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 04:30:12.246262 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:30:12.246275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-07 04:30:12.246300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:12.246309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:12.246316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-07 04:30:12.246322 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:30:12.246362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-07 04:30:12.246378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-07 04:30:12.246389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:12.246398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:30:12.246415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-07 04:30:12.246432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-07 04:30:12.246451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-07 04:30:25.550564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:30:25.550655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:30:25.550681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-07 04:30:25.550687 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:25.550692 | orchestrator |
2026-04-07 04:30:25.550698 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-07 04:30:25.550703 | orchestrator | Tuesday 07 April 2026 04:30:13 +0000 (0:00:02.190) 0:08:49.489 *********
2026-04-07 04:30:25.550708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550750 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:25.550765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550777 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:25.550781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-07 04:30:25.550789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-07 04:30:25.550796 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:25.550800 | orchestrator |
2026-04-07 04:30:25.550804 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-07 04:30:25.550808 | orchestrator | Tuesday 07 April 2026 04:30:15 +0000 (0:00:02.529) 0:08:52.019 *********
2026-04-07 04:30:25.550812 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:25.550816 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:25.550819 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:25.550823 | orchestrator |
2026-04-07 04:30:25.550827 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-07 04:30:25.550831 | orchestrator | Tuesday 07 April 2026 04:30:17 +0000 (0:00:01.609) 0:08:53.629 *********
2026-04-07 04:30:25.550834 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:25.550838 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:25.550842 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:25.550846 | orchestrator |
2026-04-07 04:30:25.550850 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-07 04:30:25.550853 | orchestrator | Tuesday 07 April 2026 04:30:20 +0000 (0:00:02.816) 0:08:56.158 *********
2026-04-07 04:30:25.550860 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:30:25.550865 | orchestrator |
2026-04-07 04:30:25.550869 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-07 04:30:25.550873 | orchestrator | Tuesday 07 April 2026 04:30:22 +0000 (0:00:02.816) 0:08:58.975 *********
2026-04-07 04:30:25.550881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279615 | orchestrator |
2026-04-07 04:30:41.279626 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-07 04:30:41.279635 | orchestrator | Tuesday 07 April 2026 04:30:26 +0000 (0:00:03.992) 0:09:02.968 *********
2026-04-07 04:30:41.279659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279668 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:41.279695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279724 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:41.279732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:30:41.279740 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:41.279748 | orchestrator |
2026-04-07 04:30:41.279756 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-07 04:30:41.279764 | orchestrator | Tuesday 07 April 2026 04:30:28 +0000 (0:00:01.548) 0:09:04.516 *********
2026-04-07 04:30:41.279772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-07 04:30:41.279781 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:41.279789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-07 04:30:41.279796 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:41.279804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-07 04:30:41.279811 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:41.279819 | orchestrator |
2026-04-07 04:30:41.279826 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-07 04:30:41.279834 | orchestrator | Tuesday 07 April 2026 04:30:30 +0000 (0:00:01.955) 0:09:06.472 *********
2026-04-07 04:30:41.279841 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:41.279848 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:41.279855 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:41.279862 | orchestrator |
2026-04-07 04:30:41.279870 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-07 04:30:41.279878 | orchestrator | Tuesday 07 April 2026 04:30:32 +0000 (0:00:01.713) 0:09:08.185 *********
2026-04-07 04:30:41.279885 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:41.279892 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:30:41.279900 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:30:41.279908 | orchestrator |
2026-04-07 04:30:41.279916 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-07 04:30:41.279954 | orchestrator | Tuesday 07 April 2026 04:30:34 +0000 (0:00:02.649) 0:09:10.683 *********
2026-04-07 04:30:41.279962 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:30:41.279978 | orchestrator |
2026-04-07 04:30:41.279985 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-07 04:30:41.279993 | orchestrator | Tuesday 07 April 2026 04:30:37 +0000 (0:00:02.649) 0:09:13.333 *********
2026-04-07 04:30:41.280073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:30:41.280101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:30:47.278488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:30:47.278593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:30:47.278619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:30:47.278638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:30:47.278645 | orchestrator |
2026-04-07 04:30:47.278652 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-07 04:30:47.278658 | orchestrator | Tuesday 07 April 2026 04:30:46 +0000 (0:00:09.417) 0:09:22.751 *********
2026-04-07 04:30:47.278665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:30:47.278675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:30:47.278686 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:30:47.278692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:30:47.278703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:31:10.058186 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:31:10.058308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-07 04:31:10.058362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-07 04:31:10.058376 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:31:10.058387 | orchestrator |
2026-04-07 04:31:10.058399 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-07 04:31:10.058411 | orchestrator | Tuesday 07 April 2026 04:30:49 +0000 (0:00:02.338) 0:09:25.089 *********
2026-04-07 04:31:10.058422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058470 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:31:10.058480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058552 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:31:10.058564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-07 04:31:10.058588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-07 04:31:10.058618 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:31:10.058629 | orchestrator |
2026-04-07 04:31:10.058642 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-07 04:31:10.058654 | orchestrator | Tuesday 07 April 2026 04:30:51 +0000 (0:00:02.507) 0:09:27.597 *********
2026-04-07 04:31:10.058666 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:31:10.058677 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:31:10.058689 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:31:10.058701 | orchestrator |
2026-04-07 04:31:10.058712 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-07 04:31:10.058724 | orchestrator | Tuesday 07 April 2026 04:30:54 +0000 (0:00:02.515) 0:09:30.113 *********
2026-04-07 04:31:10.058736 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:31:10.058748 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:31:10.058759 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:31:10.058770 | orchestrator |
2026-04-07 04:31:10.058782 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-07 04:31:10.058794 | orchestrator | Tuesday 07 April 2026 04:30:57 +0000 (0:00:03.242) 0:09:33.355 *********
2026-04-07 04:31:10.058806 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:31:10.058815 | orchestrator
| skipping: [testbed-node-1] 2026-04-07 04:31:10.058825 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:31:10.058835 | orchestrator | 2026-04-07 04:31:10.058845 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-07 04:31:10.058854 | orchestrator | Tuesday 07 April 2026 04:30:59 +0000 (0:00:01.823) 0:09:35.179 ********* 2026-04-07 04:31:10.058864 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:31:10.058874 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:31:10.058884 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:31:10.058894 | orchestrator | 2026-04-07 04:31:10.058903 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-07 04:31:10.058913 | orchestrator | Tuesday 07 April 2026 04:31:00 +0000 (0:00:01.713) 0:09:36.892 ********* 2026-04-07 04:31:10.058923 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:31:10.058933 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:31:10.058942 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:31:10.058952 | orchestrator | 2026-04-07 04:31:10.058962 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-07 04:31:10.058971 | orchestrator | Tuesday 07 April 2026 04:31:02 +0000 (0:00:01.662) 0:09:38.555 ********* 2026-04-07 04:31:10.059019 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:31:10.059035 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:31:10.059051 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:31:10.059066 | orchestrator | 2026-04-07 04:31:10.059082 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-07 04:31:10.059107 | orchestrator | Tuesday 07 April 2026 04:31:04 +0000 (0:00:01.731) 0:09:40.287 ********* 2026-04-07 04:31:10.059124 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:31:10.059139 | orchestrator | 
skipping: [testbed-node-1] 2026-04-07 04:31:10.059156 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:31:10.059172 | orchestrator | 2026-04-07 04:31:10.059189 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-07 04:31:10.059203 | orchestrator | Tuesday 07 April 2026 04:31:06 +0000 (0:00:01.834) 0:09:42.122 ********* 2026-04-07 04:31:10.059217 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:31:10.059233 | orchestrator | 2026-04-07 04:31:10.059247 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-07 04:31:10.059263 | orchestrator | Tuesday 07 April 2026 04:31:08 +0000 (0:00:02.427) 0:09:44.549 ********* 2026-04-07 04:31:10.059294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-07 04:31:15.362340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:31:15.362357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:31:15.362362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-07 04:31:15.362367 | orchestrator | 2026-04-07 04:31:15.362372 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-07 04:31:15.362378 | orchestrator | Tuesday 07 April 2026 04:31:13 +0000 (0:00:04.758) 0:09:49.308 ********* 2026-04-07 04:31:15.362383 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:31:15.362388 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:31:15.362393 | orchestrator | } 2026-04-07 04:31:15.362400 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:31:15.362404 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:31:15.362408 | orchestrator | } 2026-04-07 04:31:15.362412 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:31:15.362417 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:31:15.362421 | orchestrator | } 2026-04-07 04:31:15.362425 | orchestrator | 2026-04-07 04:31:15.362429 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-07 04:31:15.362434 | orchestrator | Tuesday 07 April 2026 04:31:14 +0000 (0:00:01.572) 0:09:50.880 ********* 2026-04-07 04:31:15.362438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-07 04:31:15.362446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:31:15.362451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:31:15.362455 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:31:15.362460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-07 04:31:15.362468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:33:19.670622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:33:19.670752 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.670770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-07 04:33:19.670804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-07 04:33:19.670901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-07 04:33:19.670936 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.670947 | orchestrator | 2026-04-07 04:33:19.670958 | orchestrator | 
RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-07 04:33:19.670970 | orchestrator | Tuesday 07 April 2026 04:31:17 +0000 (0:00:02.719) 0:09:53.600 ********* 2026-04-07 04:33:19.670984 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.670995 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.671005 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.671015 | orchestrator | 2026-04-07 04:33:19.671025 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-07 04:33:19.671034 | orchestrator | Tuesday 07 April 2026 04:31:19 +0000 (0:00:01.837) 0:09:55.438 ********* 2026-04-07 04:33:19.671044 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.671053 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.671063 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.671072 | orchestrator | 2026-04-07 04:33:19.671082 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-07 04:33:19.671091 | orchestrator | Tuesday 07 April 2026 04:31:21 +0000 (0:00:01.661) 0:09:57.100 ********* 2026-04-07 04:33:19.671101 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671111 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671120 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671130 | orchestrator | 2026-04-07 04:33:19.671139 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-07 04:33:19.671149 | orchestrator | Tuesday 07 April 2026 04:31:28 +0000 (0:00:07.270) 0:10:04.370 ********* 2026-04-07 04:33:19.671159 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671168 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671249 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671262 | orchestrator | 2026-04-07 04:33:19.671272 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
proxysql container] **************** 2026-04-07 04:33:19.671282 | orchestrator | Tuesday 07 April 2026 04:31:35 +0000 (0:00:07.169) 0:10:11.541 ********* 2026-04-07 04:33:19.671291 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671301 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671310 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671320 | orchestrator | 2026-04-07 04:33:19.671329 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-07 04:33:19.671339 | orchestrator | Tuesday 07 April 2026 04:31:42 +0000 (0:00:07.209) 0:10:18.751 ********* 2026-04-07 04:33:19.671349 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671368 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671378 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671388 | orchestrator | 2026-04-07 04:33:19.671416 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-07 04:33:19.671427 | orchestrator | Tuesday 07 April 2026 04:31:51 +0000 (0:00:08.318) 0:10:27.070 ********* 2026-04-07 04:33:19.671436 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.671446 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.671456 | orchestrator | 2026-04-07 04:33:19.671465 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-07 04:33:19.671475 | orchestrator | Tuesday 07 April 2026 04:31:53 +0000 (0:00:02.886) 0:10:29.956 ********* 2026-04-07 04:33:19.671485 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671494 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671504 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671513 | orchestrator | 2026-04-07 04:33:19.671530 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-07 04:33:19.671539 | orchestrator | Tuesday 07 
April 2026 04:32:07 +0000 (0:00:13.518) 0:10:43.474 ********* 2026-04-07 04:33:19.671549 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.671559 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.671568 | orchestrator | 2026-04-07 04:33:19.671578 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-07 04:33:19.671588 | orchestrator | Tuesday 07 April 2026 04:32:12 +0000 (0:00:04.666) 0:10:48.141 ********* 2026-04-07 04:33:19.671597 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:19.671607 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:33:19.671616 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:33:19.671626 | orchestrator | 2026-04-07 04:33:19.671636 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-07 04:33:19.671645 | orchestrator | Tuesday 07 April 2026 04:32:19 +0000 (0:00:07.606) 0:10:55.748 ********* 2026-04-07 04:33:19.671655 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.671664 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.671674 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.671684 | orchestrator | 2026-04-07 04:33:19.671693 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-07 04:33:19.671703 | orchestrator | Tuesday 07 April 2026 04:32:26 +0000 (0:00:06.884) 0:11:02.633 ********* 2026-04-07 04:33:19.671713 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.671722 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.671732 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.671741 | orchestrator | 2026-04-07 04:33:19.671751 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-07 04:33:19.671761 | orchestrator | Tuesday 07 April 2026 04:32:33 +0000 (0:00:06.918) 0:11:09.551 ********* 2026-04-07 04:33:19.671770 | 
orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.671780 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.671790 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.671799 | orchestrator | 2026-04-07 04:33:19.671809 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-07 04:33:19.671818 | orchestrator | Tuesday 07 April 2026 04:32:40 +0000 (0:00:06.977) 0:11:16.529 ********* 2026-04-07 04:33:19.671828 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.671838 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.671847 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.671856 | orchestrator | 2026-04-07 04:33:19.671866 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-04-07 04:33:19.671876 | orchestrator | Tuesday 07 April 2026 04:32:48 +0000 (0:00:07.696) 0:11:24.226 ********* 2026-04-07 04:33:19.671885 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.671895 | orchestrator | 2026-04-07 04:33:19.671904 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-07 04:33:19.671914 | orchestrator | Tuesday 07 April 2026 04:32:51 +0000 (0:00:03.711) 0:11:27.937 ********* 2026-04-07 04:33:19.671930 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.671939 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.671949 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.671959 | orchestrator | 2026-04-07 04:33:19.671968 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-04-07 04:33:19.671978 | orchestrator | Tuesday 07 April 2026 04:33:05 +0000 (0:00:13.382) 0:11:41.320 ********* 2026-04-07 04:33:19.671987 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.671997 | orchestrator | 2026-04-07 04:33:19.672007 | orchestrator | RUNNING HANDLER [loadbalancer 
: Start master keepalived container] ************* 2026-04-07 04:33:19.672016 | orchestrator | Tuesday 07 April 2026 04:33:08 +0000 (0:00:03.624) 0:11:44.944 ********* 2026-04-07 04:33:19.672026 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:19.672035 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:19.672045 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:33:19.672054 | orchestrator | 2026-04-07 04:33:19.672064 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-07 04:33:19.672074 | orchestrator | Tuesday 07 April 2026 04:33:16 +0000 (0:00:07.253) 0:11:52.197 ********* 2026-04-07 04:33:19.672083 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.672093 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.672102 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.672112 | orchestrator | 2026-04-07 04:33:19.672122 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-07 04:33:19.672131 | orchestrator | Tuesday 07 April 2026 04:33:18 +0000 (0:00:02.543) 0:11:54.741 ********* 2026-04-07 04:33:19.672141 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:19.672151 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:19.672160 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:19.672170 | orchestrator | 2026-04-07 04:33:19.672204 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:33:19.672215 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-07 04:33:19.672226 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-07 04:33:19.672243 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-07 04:33:22.096342 | orchestrator | 2026-04-07 04:33:22.096435 | 
orchestrator | 2026-04-07 04:33:22.096447 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:33:22.096456 | orchestrator | Tuesday 07 April 2026 04:33:21 +0000 (0:00:02.564) 0:11:57.305 ********* 2026-04-07 04:33:22.096462 | orchestrator | =============================================================================== 2026-04-07 04:33:22.096469 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.52s 2026-04-07 04:33:22.096476 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.38s 2026-04-07 04:33:22.096482 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.42s 2026-04-07 04:33:22.096503 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.32s 2026-04-07 04:33:22.096510 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.97s 2026-04-07 04:33:22.096516 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.85s 2026-04-07 04:33:22.096524 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.70s 2026-04-07 04:33:22.096535 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.61s 2026-04-07 04:33:22.096545 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.27s 2026-04-07 04:33:22.096555 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.25s 2026-04-07 04:33:22.096564 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.21s 2026-04-07 04:33:22.096595 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.17s 2026-04-07 04:33:22.096605 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.98s 
2026-04-07 04:33:22.096614 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.92s 2026-04-07 04:33:22.096623 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.88s 2026-04-07 04:33:22.096632 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.71s 2026-04-07 04:33:22.096642 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 6.52s 2026-04-07 04:33:22.096653 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.43s 2026-04-07 04:33:22.096662 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.19s 2026-04-07 04:33:22.096674 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 5.71s 2026-04-07 04:33:22.329984 | orchestrator | + osism apply -a upgrade opensearch 2026-04-07 04:33:23.811048 | orchestrator | 2026-04-07 04:33:23 | INFO  | Prepare task for execution of opensearch. 2026-04-07 04:33:23.893824 | orchestrator | 2026-04-07 04:33:23 | INFO  | Task 85db7494-086f-47b4-8e1f-5f26412a9895 (opensearch) was prepared for execution. 2026-04-07 04:33:23.893920 | orchestrator | 2026-04-07 04:33:23 | INFO  | It takes a moment until task 85db7494-086f-47b4-8e1f-5f26412a9895 (opensearch) has been started and output is visible here. 
2026-04-07 04:33:35.418085 | orchestrator | 2026-04-07 04:33:35.418255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 04:33:35.418276 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-07 04:33:35.418290 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-07 04:33:35.418312 | orchestrator | 2026-04-07 04:33:35.418324 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 04:33:35.418335 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-07 04:33:35.418346 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-07 04:33:35.418368 | orchestrator | Tuesday 07 April 2026 04:33:28 +0000 (0:00:01.195) 0:00:01.195 ********* 2026-04-07 04:33:35.418379 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:35.418391 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:35.418401 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:35.418412 | orchestrator | 2026-04-07 04:33:35.418423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 04:33:35.418434 | orchestrator | Tuesday 07 April 2026 04:33:29 +0000 (0:00:00.793) 0:00:01.988 ********* 2026-04-07 04:33:35.418445 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-07 04:33:35.418456 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-07 04:33:35.418467 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-07 04:33:35.418477 | orchestrator | 2026-04-07 04:33:35.418488 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-07 04:33:35.418499 | orchestrator | 2026-04-07 04:33:35.418510 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 
2026-04-07 04:33:35.418521 | orchestrator | Tuesday 07 April 2026 04:33:30 +0000 (0:00:00.834) 0:00:02.822 *********
2026-04-07 04:33:35.418532 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:33:35.418544 | orchestrator |
2026-04-07 04:33:35.418554 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-07 04:33:35.418566 | orchestrator | Tuesday 07 April 2026 04:33:31 +0000 (0:00:01.455) 0:00:04.278 *********
2026-04-07 04:33:35.418605 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 04:33:35.418618 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 04:33:35.418630 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-07 04:33:35.418642 | orchestrator |
2026-04-07 04:33:35.418655 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-07 04:33:35.418668 | orchestrator | Tuesday 07 April 2026 04:33:33 +0000 (0:00:02.115) 0:00:06.394 *********
2026-04-07 04:33:35.418698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http',
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:35.418717 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:35.418750 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 
04:33:35.418767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:35.418796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:35.418820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:39.531500 | orchestrator | 2026-04-07 04:33:39.531609 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 04:33:39.531628 | orchestrator | Tuesday 07 April 2026 04:33:35 +0000 (0:00:01.728) 0:00:08.122 ********* 2026-04-07 04:33:39.531640 | orchestrator | 
included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:33:39.531665 | orchestrator | 2026-04-07 04:33:39.531677 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-07 04:33:39.531689 | orchestrator | Tuesday 07 April 2026 04:33:36 +0000 (0:00:01.179) 0:00:09.302 ********* 2026-04-07 04:33:39.531703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:39.531771 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:39.531796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:39.531837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:39.531863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:39.531917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:39.531941 | orchestrator | 2026-04-07 04:33:39.531961 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-07 04:33:39.531981 | orchestrator | Tuesday 07 April 2026 04:33:39 +0000 (0:00:02.327) 0:00:11.629 ********* 2026-04-07 04:33:39.531999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:39.532038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:40.912176 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:40.912299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:40.912331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:40.912340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:40.912347 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:40.912366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:40.912389 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:40.912395 | orchestrator | 2026-04-07 
04:33:40.912402 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-07 04:33:40.912410 | orchestrator | Tuesday 07 April 2026 04:33:40 +0000 (0:00:00.996) 0:00:12.626 ********* 2026-04-07 04:33:40.912416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:40.912427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:40.912434 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:33:40.912440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:40.912452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:43.757114 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:33:43.757311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:33:43.757336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:33:43.757350 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:33:43.757362 | orchestrator | 2026-04-07 04:33:43.757375 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-07 04:33:43.757388 | orchestrator | Tuesday 07 April 2026 04:33:41 +0000 (0:00:01.335) 0:00:13.962 ********* 2026-04-07 04:33:43.757399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:43.757454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:43.757473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:43.757486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:43.757499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:43.757529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:51.856846 | orchestrator | 2026-04-07 04:33:51.856982 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-07 04:33:51.857004 | 
orchestrator | Tuesday 07 April 2026 04:33:43 +0000 (0:00:02.498) 0:00:16.461 ********* 2026-04-07 04:33:51.857017 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:51.857034 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:51.857053 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:51.857071 | orchestrator | 2026-04-07 04:33:51.857089 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-07 04:33:51.857106 | orchestrator | Tuesday 07 April 2026 04:33:46 +0000 (0:00:02.599) 0:00:19.061 ********* 2026-04-07 04:33:51.857145 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:33:51.857164 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:33:51.857206 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:33:51.857272 | orchestrator | 2026-04-07 04:33:51.857292 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-07 04:33:51.857311 | orchestrator | Tuesday 07 April 2026 04:33:48 +0000 (0:00:02.309) 0:00:21.370 ********* 2026-04-07 04:33:51.857334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 
04:33:51.857358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:51.857406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-07 04:33:51.857468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:51.857495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:51.857529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-07 04:33:51.857550 | orchestrator | 2026-04-07 04:33:51.857569 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-07 04:33:51.857590 | orchestrator | Tuesday 07 April 2026 04:33:51 +0000 (0:00:02.350) 0:00:23.721 ********* 2026-04-07 04:33:51.857608 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:33:51.857627 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-07 04:33:51.857646 | orchestrator | } 2026-04-07 04:33:51.857664 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:33:51.857682 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:33:51.857701 | orchestrator | } 2026-04-07 04:33:51.857719 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:33:51.857737 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:33:51.857756 | orchestrator | } 2026-04-07 04:33:51.857774 | orchestrator | 2026-04-07 04:33:51.857793 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-07 04:33:51.857812 | orchestrator | Tuesday 07 April 2026 04:33:51 +0000 (0:00:00.383) 0:00:24.104 ********* 2026-04-07 04:33:51.857844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:37:03.250161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:37:03.250220 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:37:03.250227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:37:03.250233 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:37:03.250237 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:37:03.250252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-07 04:37:03.250257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-07 04:37:03.250270 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:37:03.250275 | orchestrator | 2026-04-07 04:37:03.250280 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-07 04:37:03.250285 | orchestrator | Tuesday 07 April 2026 04:33:53 +0000 (0:00:01.855) 0:00:25.960 ********* 2026-04-07 04:37:03.250289 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:37:03.250293 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:37:03.250297 | orchestrator | skipping: [testbed-node-2] 2026-04-07 
04:37:03.250301 | orchestrator |
2026-04-07 04:37:03.250306 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-07 04:37:03.250310 | orchestrator | Tuesday 07 April 2026 04:33:53 +0000 (0:00:00.372) 0:00:26.332 *********
2026-04-07 04:37:03.250314 | orchestrator |
2026-04-07 04:37:03.250318 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-07 04:37:03.250322 | orchestrator | Tuesday 07 April 2026 04:33:53 +0000 (0:00:00.088) 0:00:26.421 *********
2026-04-07 04:37:03.250326 | orchestrator |
2026-04-07 04:37:03.250330 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-07 04:37:03.250334 | orchestrator | Tuesday 07 April 2026 04:33:53 +0000 (0:00:00.074) 0:00:26.496 *********
2026-04-07 04:37:03.250338 | orchestrator |
2026-04-07 04:37:03.250342 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-07 04:37:03.250346 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-07 04:37:03.250351 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-07 04:37:03.250359 | orchestrator | Tuesday 07 April 2026 04:33:53 +0000 (0:00:00.074) 0:00:26.570 *********
2026-04-07 04:37:03.250363 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:03.250368 | orchestrator |
2026-04-07 04:37:03.250372 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-07 04:37:03.250376 | orchestrator | Tuesday 07 April 2026 04:33:56 +0000 (0:00:02.688) 0:00:29.258 *********
2026-04-07 04:37:03.250380 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:03.250385 | orchestrator |
2026-04-07 04:37:03.250389 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-07 04:37:03.250393 | orchestrator | Tuesday 07 April 2026 04:34:00 +0000 (0:00:03.627) 0:00:32.886 *********
2026-04-07 04:37:03.250397 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:37:03.250401 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:37:03.250405 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:37:03.250409 | orchestrator |
2026-04-07 04:37:03.250413 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-07 04:37:03.250417 | orchestrator | Tuesday 07 April 2026 04:35:18 +0000 (0:01:18.692) 0:01:51.579 *********
2026-04-07 04:37:03.250421 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:37:03.250425 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:37:03.250429 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:37:03.250433 | orchestrator |
2026-04-07 04:37:03.250437 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-07 04:37:03.250441 | orchestrator | Tuesday 07 April 2026 04:36:57 +0000 (0:01:38.105) 0:03:29.685 *********
2026-04-07 04:37:03.250449 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:37:03.250453 | orchestrator |
2026-04-07 04:37:03.250457 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-07 04:37:03.250461 | orchestrator | Tuesday 07 April 2026 04:36:58 +0000 (0:00:01.120) 0:03:30.805 *********
2026-04-07 04:37:03.250465 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:03.250484 | orchestrator |
2026-04-07 04:37:03.250488 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-07 04:37:03.250492 | orchestrator | Tuesday 07 April 2026 04:37:00 +0000 (0:00:02.615) 0:03:33.420 *********
2026-04-07 04:37:03.250496 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:03.250500 | orchestrator |
2026-04-07 04:37:03.250508 | orchestrator | TASK
[opensearch : Check if a log retention policy exists] *********************
2026-04-07 04:37:07.840708 | orchestrator | Tuesday 07 April 2026 04:37:03 +0000 (0:00:02.410) 0:03:35.831 *********
2026-04-07 04:37:07.840823 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:07.840841 | orchestrator |
2026-04-07 04:37:07.840855 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-07 04:37:07.840867 | orchestrator | Tuesday 07 April 2026 04:37:05 +0000 (0:00:02.599) 0:03:38.430 *********
2026-04-07 04:37:07.840974 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:37:07.840996 | orchestrator |
2026-04-07 04:37:07.841008 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-07 04:37:07.841020 | orchestrator | Tuesday 07 April 2026 04:37:06 +0000 (0:00:00.237) 0:03:38.667 *********
2026-04-07 04:37:07.841032 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:37:07.841043 | orchestrator |
2026-04-07 04:37:07.841055 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 04:37:07.841068 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:37:07.841081 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-07 04:37:07.841092 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-07 04:37:07.841103 | orchestrator |
2026-04-07 04:37:07.841114 | orchestrator |
2026-04-07 04:37:07.841126 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:37:07.841137 | orchestrator | Tuesday 07 April 2026 04:37:07 +0000 (0:00:01.223) 0:03:39.891 *********
2026-04-07 04:37:07.841148 | orchestrator | ===============================================================================
2026-04-07 04:37:07.841160 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 98.11s
2026-04-07 04:37:07.841171 | orchestrator | opensearch : Restart opensearch container ------------------------------ 78.69s
2026-04-07 04:37:07.841182 | orchestrator | opensearch : Perform a flush -------------------------------------------- 3.63s
2026-04-07 04:37:07.841193 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.69s
2026-04-07 04:37:07.841204 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.62s
2026-04-07 04:37:07.841215 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.60s
2026-04-07 04:37:07.841227 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.60s
2026-04-07 04:37:07.841238 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.50s
2026-04-07 04:37:07.841252 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.41s
2026-04-07 04:37:07.841266 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.35s
2026-04-07 04:37:07.841279 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.33s
2026-04-07 04:37:07.841313 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.31s
2026-04-07 04:37:07.841327 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.12s
2026-04-07 04:37:07.841341 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.86s
2026-04-07 04:37:07.841355 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.73s
2026-04-07 04:37:07.841368 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.46s
2026-04-07 04:37:07.841382 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.34s
2026-04-07 04:37:07.841395 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.22s
2026-04-07 04:37:07.841409 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.18s
2026-04-07 04:37:07.841422 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.12s
2026-04-07 04:37:08.035055 | orchestrator | + osism apply -a upgrade memcached
2026-04-07 04:37:09.364710 | orchestrator | 2026-04-07 04:37:09 | INFO  | Prepare task for execution of memcached.
2026-04-07 04:37:09.431729 | orchestrator | 2026-04-07 04:37:09 | INFO  | Task b92d4f45-4de2-44bc-9dd6-ff89f6e6cbd9 (memcached) was prepared for execution.
2026-04-07 04:37:09.431822 | orchestrator | 2026-04-07 04:37:09 | INFO  | It takes a moment until task b92d4f45-4de2-44bc-9dd6-ff89f6e6cbd9 (memcached) has been started and output is visible here.
2026-04-07 04:37:42.609026 | orchestrator |
2026-04-07 04:37:42.609146 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 04:37:42.609164 | orchestrator |
2026-04-07 04:37:42.609177 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 04:37:42.609191 | orchestrator | Tuesday 07 April 2026 04:37:14 +0000 (0:00:01.700) 0:00:01.700 *********
2026-04-07 04:37:42.609202 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:37:42.609215 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:37:42.609226 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:37:42.609238 | orchestrator |
2026-04-07 04:37:42.609250 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 04:37:42.609261 | orchestrator | Tuesday 07 April 2026 04:37:16 +0000 (0:00:01.833) 0:00:03.534 *********
2026-04-07 04:37:42.609273 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-07 04:37:42.609285 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-07 04:37:42.609297 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-07 04:37:42.609308 | orchestrator |
2026-04-07 04:37:42.609320 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-07 04:37:42.609331 | orchestrator |
2026-04-07 04:37:42.609358 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-07 04:37:42.609371 | orchestrator | Tuesday 07 April 2026 04:37:18 +0000 (0:00:01.948) 0:00:05.483 *********
2026-04-07 04:37:42.609383 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:37:42.609396 | orchestrator |
2026-04-07 04:37:42.609408 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-07 04:37:42.609419 | orchestrator | Tuesday 07 April 2026 04:37:21 +0000 (0:00:03.269) 0:00:08.752 *********
2026-04-07 04:37:42.609431 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-04-07 04:37:42.609443 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-04-07 04:37:42.609455 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-04-07 04:37:42.609467 | orchestrator |
2026-04-07 04:37:42.609479 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-07 04:37:42.609491 | orchestrator | Tuesday 07 April 2026 04:37:23 +0000 (0:00:02.279) 0:00:11.031 *********
2026-04-07 04:37:42.609502 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-04-07 04:37:42.609544 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-04-07 04:37:42.609580 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-04-07 04:37:42.609594 | orchestrator |
2026-04-07 04:37:42.609607 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-04-07 04:37:42.609625 | orchestrator | Tuesday 07 April 2026 04:37:26 +0000 (0:00:02.573) 0:00:13.605 *********
2026-04-07 04:37:42.609649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.609673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.609722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.609750 | orchestrator |
2026-04-07 04:37:42.609770 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-04-07 04:37:42.609789 | orchestrator | Tuesday 07 April 2026 04:37:28 +0000 (0:00:02.234) 0:00:15.840 *********
2026-04-07 04:37:42.609808 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:37:42.609828 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:37:42.609849 | orchestrator | }
2026-04-07 04:37:42.609869 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:37:42.609888 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:37:42.609902 | orchestrator | }
2026-04-07 04:37:42.609915 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:37:42.609934 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:37:42.609952 | orchestrator | }
2026-04-07 04:37:42.609971 | orchestrator |
2026-04-07 04:37:42.609989 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:37:42.610008 | orchestrator | Tuesday 07 April 2026 04:37:30 +0000 (0:00:01.348) 0:00:17.188 *********
2026-04-07 04:37:42.610100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.610126 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:37:42.610138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.610150 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:37:42.610161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-07 04:37:42.610172 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:37:42.610183 | orchestrator |
2026-04-07 04:37:42.610194 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-07 04:37:42.610206 | orchestrator | Tuesday 07 April 2026 04:37:32 +0000 (0:00:02.112) 0:00:19.301 *********
2026-04-07 04:37:42.610216 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:37:42.610228 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:37:42.610238 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:37:42.610249 | orchestrator |
2026-04-07 04:37:42.610260 |
orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:37:42.610272 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 04:37:42.610284 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 04:37:42.610295 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-07 04:37:42.610306 | orchestrator | 2026-04-07 04:37:42.610317 | orchestrator | 2026-04-07 04:37:42.610328 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:37:42.610350 | orchestrator | Tuesday 07 April 2026 04:37:42 +0000 (0:00:10.457) 0:00:29.758 ********* 2026-04-07 04:37:42.968680 | orchestrator | =============================================================================== 2026-04-07 04:37:42.968783 | orchestrator | memcached : Restart memcached container -------------------------------- 10.46s 2026-04-07 04:37:42.968798 | orchestrator | memcached : include_tasks ----------------------------------------------- 3.27s 2026-04-07 04:37:42.968809 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.57s 2026-04-07 04:37:42.968852 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.28s 2026-04-07 04:37:42.968872 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.23s 2026-04-07 04:37:42.968895 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s 2026-04-07 04:37:42.968915 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s 2026-04-07 04:37:42.968930 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.83s 2026-04-07 04:37:42.968956 | orchestrator | 
service-check-containers : memcached | Notify handlers to restart containers --- 1.35s 2026-04-07 04:37:43.159852 | orchestrator | + osism apply -a upgrade redis 2026-04-07 04:37:44.510744 | orchestrator | 2026-04-07 04:37:44 | INFO  | Prepare task for execution of redis. 2026-04-07 04:37:44.586445 | orchestrator | 2026-04-07 04:37:44 | INFO  | Task fb241b10-8fd2-4652-9d14-386d554dfd92 (redis) was prepared for execution. 2026-04-07 04:37:44.586567 | orchestrator | 2026-04-07 04:37:44 | INFO  | It takes a moment until task fb241b10-8fd2-4652-9d14-386d554dfd92 (redis) has been started and output is visible here. 2026-04-07 04:37:56.160120 | orchestrator | 2026-04-07 04:37:56.160218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 04:37:56.160234 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-07 04:37:56.160247 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-07 04:37:56.160269 | orchestrator | 2026-04-07 04:37:56.160280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 04:37:56.160291 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-07 04:37:56.160302 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-07 04:37:56.160324 | orchestrator | Tuesday 07 April 2026 04:37:49 +0000 (0:00:01.110) 0:00:01.110 ********* 2026-04-07 04:37:56.160335 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:37:56.160347 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:37:56.160357 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:37:56.160368 | orchestrator | 2026-04-07 04:37:56.160379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 04:37:56.160390 | orchestrator | Tuesday 07 April 2026 04:37:49 +0000 (0:00:00.944) 0:00:02.055 ********* 2026-04-07 
04:37:56.160401 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-07 04:37:56.160412 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-07 04:37:56.160423 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-07 04:37:56.160434 | orchestrator | 2026-04-07 04:37:56.160445 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-07 04:37:56.160455 | orchestrator | 2026-04-07 04:37:56.160466 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-07 04:37:56.160477 | orchestrator | Tuesday 07 April 2026 04:37:50 +0000 (0:00:00.955) 0:00:03.010 ********* 2026-04-07 04:37:56.160488 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:37:56.160499 | orchestrator | 2026-04-07 04:37:56.160510 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-07 04:37:56.160521 | orchestrator | Tuesday 07 April 2026 04:37:52 +0000 (0:00:01.539) 0:00:04.550 ********* 2026-04-07 04:37:56.160583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160622 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160692 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160707 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160720 | orchestrator | 2026-04-07 04:37:56.160734 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-07 04:37:56.160747 | orchestrator | Tuesday 07 April 2026 04:37:54 +0000 (0:00:01.722) 0:00:06.272 ********* 2026-04-07 04:37:56.160761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160814 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:37:56.160836 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.435845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.435949 | orchestrator | 2026-04-07 04:38:01.435964 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-07 04:38:01.435977 | orchestrator | Tuesday 07 April 2026 04:37:56 +0000 (0:00:02.143) 0:00:08.415 ********* 2026-04-07 04:38:01.436013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436026 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436073 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436112 | orchestrator | 2026-04-07 04:38:01.436123 | orchestrator 
| TASK [service-check-containers : redis | Check containers] ********************* 2026-04-07 04:38:01.436141 | orchestrator | Tuesday 07 April 2026 04:37:59 +0000 (0:00:03.079) 0:00:11.495 ********* 2026-04-07 04:38:01.436153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-04-07 04:38:01.436191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:01.436222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-07 04:38:23.941640 | orchestrator | 2026-04-07 04:38:23.941740 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-07 04:38:23.941754 | orchestrator | Tuesday 07 April 2026 04:38:01 +0000 (0:00:02.103) 0:00:13.599 ********* 2026-04-07 04:38:23.941763 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:38:23.941773 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:38:23.941782 | orchestrator | } 2026-04-07 04:38:23.941790 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:38:23.941799 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:38:23.941807 | orchestrator | } 2026-04-07 04:38:23.941815 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:38:23.941823 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:38:23.941831 | orchestrator | } 2026-04-07 04:38:23.941839 | orchestrator | 2026-04-07 04:38:23.941848 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-07 04:38:23.941857 | orchestrator | Tuesday 07 April 2026 04:38:01 +0000 (0:00:00.346) 0:00:13.945 ********* 2026-04-07 04:38:23.941867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941910 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:38:23.941919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941928 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:38:23.941936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-07 04:38:23.941989 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:38:23.941997 | orchestrator | 2026-04-07 04:38:23.942006 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-07 04:38:23.942014 | orchestrator | Tuesday 07 April 2026 04:38:03 +0000 
(0:00:01.345) 0:00:15.290 *********
2026-04-07 04:38:23.942089 | orchestrator |
2026-04-07 04:38:23.942103 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-07 04:38:23.942116 | orchestrator | Tuesday 07 April 2026 04:38:03 +0000 (0:00:00.091) 0:00:15.381 *********
2026-04-07 04:38:23.942128 | orchestrator |
2026-04-07 04:38:23.942142 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-07 04:38:23.942156 | orchestrator | Tuesday 07 April 2026 04:38:03 +0000 (0:00:00.077) 0:00:15.459 *********
2026-04-07 04:38:23.942170 | orchestrator |
2026-04-07 04:38:23.942185 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-07 04:38:23.942201 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-07 04:38:23.942218 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-07 04:38:23.942238 | orchestrator | Tuesday 07 April 2026 04:38:03 +0000 (0:00:00.073) 0:00:15.533 *********
2026-04-07 04:38:23.942247 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:38:23.942257 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:38:23.942266 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:38:23.942275 | orchestrator |
2026-04-07 04:38:23.942284 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-07 04:38:23.942294 | orchestrator | Tuesday 07 April 2026 04:38:13 +0000 (0:00:09.599) 0:00:25.133 *********
2026-04-07 04:38:23.942302 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:38:23.942311 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:38:23.942321 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:38:23.942330 | orchestrator |
2026-04-07 04:38:23.942338 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 04:38:23.942347 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:38:23.942357 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:38:23.942365 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:38:23.942373 | orchestrator |
2026-04-07 04:38:23.942381 | orchestrator |
2026-04-07 04:38:23.942398 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:38:23.942411 | orchestrator | Tuesday 07 April 2026 04:38:23 +0000 (0:00:10.597) 0:00:35.730 *********
2026-04-07 04:38:23.942419 | orchestrator | ===============================================================================
2026-04-07 04:38:23.942427 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.60s
2026-04-07 04:38:23.942435 | orchestrator | redis : Restart redis container ----------------------------------------- 9.60s
2026-04-07 04:38:23.942443 | orchestrator | redis : Copying over redis config files --------------------------------- 3.08s
2026-04-07 04:38:23.942450 | orchestrator | redis : Copying over default config.json files -------------------------- 2.14s
2026-04-07 04:38:23.942458 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.10s
2026-04-07 04:38:23.942466 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.72s
2026-04-07 04:38:23.942474 | orchestrator | redis : include_tasks --------------------------------------------------- 1.54s
2026-04-07 04:38:23.942482 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.35s
2026-04-07 04:38:23.942490 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2026-04-07 04:38:23.942498 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s
2026-04-07 04:38:23.942506 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.35s
2026-04-07 04:38:23.942514 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2026-04-07 04:38:24.139169 | orchestrator | + osism apply -a upgrade mariadb
2026-04-07 04:38:25.434005 | orchestrator | 2026-04-07 04:38:25 | INFO  | Prepare task for execution of mariadb.
2026-04-07 04:38:25.503329 | orchestrator | 2026-04-07 04:38:25 | INFO  | Task 82c96e6b-eeec-4a20-bb51-e3d5d69b097f (mariadb) was prepared for execution.
2026-04-07 04:38:25.503438 | orchestrator | 2026-04-07 04:38:25 | INFO  | It takes a moment until task 82c96e6b-eeec-4a20-bb51-e3d5d69b097f (mariadb) has been started and output is visible here.
2026-04-07 04:38:39.913713 | orchestrator |
2026-04-07 04:38:39.913833 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 04:38:39.913850 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-07 04:38:39.913864 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-07 04:38:39.913887 | orchestrator |
2026-04-07 04:38:39.913898 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 04:38:39.913909 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-07 04:38:39.913920 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-07 04:38:39.913941 | orchestrator | Tuesday 07 April 2026 04:38:30 +0000 (0:00:01.560) 0:00:01.560 *********
2026-04-07 04:38:39.913952 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:39.913964 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:38:39.913975 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:38:39.913986 | orchestrator |
2026-04-07 04:38:39.913997 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 04:38:39.914008 | orchestrator | Tuesday 07 April 2026 04:38:31 +0000 (0:00:01.014) 0:00:02.574 *********
2026-04-07 04:38:39.914073 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-07 04:38:39.914087 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-07 04:38:39.914098 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-07 04:38:39.914109 | orchestrator |
2026-04-07 04:38:39.914120 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-07 04:38:39.914158 | orchestrator |
2026-04-07 04:38:39.914169 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-07 04:38:39.914181 | orchestrator | Tuesday 07 April 2026 04:38:32 +0000 (0:00:00.912) 0:00:03.487 *********
2026-04-07 04:38:39.914195 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:38:39.914209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 04:38:39.914221 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 04:38:39.914234 | orchestrator |
2026-04-07 04:38:39.914246 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 04:38:39.914264 | orchestrator | Tuesday 07 April 2026 04:38:32 +0000 (0:00:00.514) 0:00:04.001 *********
2026-04-07 04:38:39.914281 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:38:39.914296 | orchestrator |
2026-04-07 04:38:39.914308 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] ***********************************
2026-04-07 04:38:39.914321 | orchestrator | Tuesday 07 April 2026 04:38:34 +0000 (0:00:01.669) 0:00:05.670 *********
2026-04-07 04:38:39.914334 |
orchestrator | ok: [testbed-node-1]
2026-04-07 04:38:39.914346 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:39.914359 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:38:39.914372 | orchestrator |
2026-04-07 04:38:39.914384 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-07 04:38:39.914396 | orchestrator | Tuesday 07 April 2026 04:38:36 +0000 (0:00:02.071) 0:00:07.742 *********
2026-04-07 04:38:39.914453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:38:39.914473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:38:39.914504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:38:39.914520 | orchestrator |
2026-04-07 04:38:39.914534 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-07 04:38:39.914547 | orchestrator | Tuesday 07 April 2026 04:38:39 +0000 (0:00:02.733) 0:00:10.475 *********
2026-04-07 04:38:39.914559 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:38:39.914570 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:39.914606 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:38:39.914618 | orchestrator |
2026-04-07 04:38:39.914629 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-07 04:38:39.914648 | orchestrator | Tuesday 07 April 2026 04:38:39 +0000 (0:00:00.607) 0:00:11.083 *********
2026-04-07 04:38:52.501108 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:38:52.501224 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:38:52.501239 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:52.501252 | orchestrator |
2026-04-07 04:38:52.501266 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-07 04:38:52.501303 | orchestrator | Tuesday 07 April 2026 04:38:41 +0000 (0:00:01.204) 0:00:12.287 *********
2026-04-07 04:38:52.501322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'},
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:38:52.501354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:38:52.501388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-07 04:38:52.501412 | orchestrator |
2026-04-07 04:38:52.501424 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-07 04:38:52.501436 | orchestrator | Tuesday 07 April 2026 04:38:44 +0000 (0:00:03.336) 0:00:15.623 *********
2026-04-07 04:38:52.501447 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:38:52.501458 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:38:52.501468 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:52.501479 | orchestrator |
2026-04-07 04:38:52.501490 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-07 04:38:52.501501 | orchestrator | Tuesday 07 April 2026 04:38:45 +0000 (0:00:01.080) 0:00:16.703 *********
2026-04-07 04:38:52.501512 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:38:52.501523 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:38:52.501533 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:38:52.501544 | orchestrator |
2026-04-07 04:38:52.501555 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-07 04:38:52.501566 | orchestrator | Tuesday 07 April 2026 04:38:49 +0000 (0:00:03.902) 0:00:20.606 *********
2026-04-07 04:38:52.501577 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:38:52.501589 | orchestrator |
2026-04-07 04:38:52.501633 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-07 04:38:52.501647 | orchestrator | Tuesday 07 April 2026 04:38:50 +0000 (0:00:00.890) 0:00:21.497 *********
2026-04-07 04:38:52.501670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:38:55.206189 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:38:55.206337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:38:55.206364 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:38:55.206398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:38:55.206432 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:38:55.206444 | orchestrator | 2026-04-07 04:38:55.206490 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-07 04:38:55.206505 | orchestrator | Tuesday 07 April 2026 04:38:52 +0000 (0:00:02.640) 0:00:24.138 ********* 2026-04-07 04:38:55.206539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:38:55.206553 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:38:55.206571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:38:55.206673 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:38:55.206704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:01.386444 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:01.386567 | orchestrator | 2026-04-07 04:39:01.386586 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-07 04:39:01.386599 | orchestrator | Tuesday 07 April 2026 04:38:55 +0000 (0:00:02.333) 0:00:26.471 ********* 2026-04-07 04:39:01.386704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:01.386746 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:01.386760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:01.386773 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:01.386812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:01.386835 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:01.386847 | orchestrator | 2026-04-07 04:39:01.386858 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-07 04:39:01.386870 | orchestrator | Tuesday 07 April 2026 04:38:58 +0000 (0:00:03.028) 0:00:29.501 ********* 2026-04-07 04:39:01.386882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:39:01.386910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:39:05.124747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-07 04:39:05.124839 | orchestrator | 2026-04-07 04:39:05.124849 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-07 04:39:05.124857 | orchestrator | Tuesday 07 April 2026 04:39:01 +0000 (0:00:03.278) 0:00:32.779 ********* 2026-04-07 04:39:05.124865 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:39:05.124872 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:39:05.124879 | orchestrator | } 2026-04-07 04:39:05.124885 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:39:05.124892 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:39:05.124898 | orchestrator | } 2026-04-07 04:39:05.124904 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:39:05.124910 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:39:05.124916 | orchestrator | } 2026-04-07 04:39:05.124922 | orchestrator | 2026-04-07 04:39:05.124929 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-07 04:39:05.124935 | orchestrator | Tuesday 07 April 2026 04:39:01 +0000 (0:00:00.357) 0:00:33.136 ********* 2026-04-07 04:39:05.124970 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:05.124996 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:05.125003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:05.125010 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:05.125021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:05.125032 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:05.125038 | orchestrator | 2026-04-07 04:39:05.125045 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-07 04:39:05.125060 | orchestrator | Tuesday 07 April 
2026 04:39:05 +0000 (0:00:03.161) 0:00:36.298 ********* 2026-04-07 04:39:14.057929 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058072 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058083 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058091 | orchestrator | 2026-04-07 04:39:14.058099 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-07 04:39:14.058107 | orchestrator | Tuesday 07 April 2026 04:39:05 +0000 (0:00:00.535) 0:00:36.833 ********* 2026-04-07 04:39:14.058114 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058121 | orchestrator | 2026-04-07 04:39:14.058128 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-07 04:39:14.058135 | orchestrator | Tuesday 07 April 2026 04:39:05 +0000 (0:00:00.111) 0:00:36.945 ********* 2026-04-07 04:39:14.058142 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058149 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058155 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058162 | orchestrator | 2026-04-07 04:39:14.058169 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-07 04:39:14.058176 | orchestrator | Tuesday 07 April 2026 04:39:06 +0000 (0:00:00.325) 0:00:37.271 ********* 2026-04-07 04:39:14.058182 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058189 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058196 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058202 | orchestrator | 2026-04-07 04:39:14.058209 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-07 04:39:14.058227 | orchestrator | Tuesday 07 April 2026 04:39:06 +0000 (0:00:00.346) 0:00:37.618 ********* 2026-04-07 04:39:14.058242 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
04:39:14.058249 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058256 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058263 | orchestrator | 2026-04-07 04:39:14.058270 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-07 04:39:14.058277 | orchestrator | Tuesday 07 April 2026 04:39:06 +0000 (0:00:00.530) 0:00:38.148 ********* 2026-04-07 04:39:14.058283 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058290 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058297 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058303 | orchestrator | 2026-04-07 04:39:14.058310 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-07 04:39:14.058317 | orchestrator | Tuesday 07 April 2026 04:39:07 +0000 (0:00:00.387) 0:00:38.536 ********* 2026-04-07 04:39:14.058323 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058330 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058337 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058361 | orchestrator | 2026-04-07 04:39:14.058369 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-07 04:39:14.058375 | orchestrator | Tuesday 07 April 2026 04:39:07 +0000 (0:00:00.331) 0:00:38.868 ********* 2026-04-07 04:39:14.058382 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058389 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058395 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058402 | orchestrator | 2026-04-07 04:39:14.058409 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-07 04:39:14.058415 | orchestrator | Tuesday 07 April 2026 04:39:08 +0000 (0:00:00.360) 0:00:39.228 ********* 2026-04-07 04:39:14.058422 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-0)  2026-04-07 04:39:14.058429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 04:39:14.058435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 04:39:14.058442 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058448 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-07 04:39:14.058455 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-07 04:39:14.058462 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-07 04:39:14.058468 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058475 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-07 04:39:14.058481 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-07 04:39:14.058489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-07 04:39:14.058498 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058505 | orchestrator | 2026-04-07 04:39:14.058513 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-07 04:39:14.058521 | orchestrator | Tuesday 07 April 2026 04:39:08 +0000 (0:00:00.633) 0:00:39.861 ********* 2026-04-07 04:39:14.058529 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058537 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058545 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058553 | orchestrator | 2026-04-07 04:39:14.058561 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-07 04:39:14.058580 | orchestrator | Tuesday 07 April 2026 04:39:09 +0000 (0:00:00.361) 0:00:40.222 ********* 2026-04-07 04:39:14.058588 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058596 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058605 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 04:39:14.058612 | orchestrator | 2026-04-07 04:39:14.058667 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-07 04:39:14.058677 | orchestrator | Tuesday 07 April 2026 04:39:09 +0000 (0:00:00.326) 0:00:40.549 ********* 2026-04-07 04:39:14.058684 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058690 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058697 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058704 | orchestrator | 2026-04-07 04:39:14.058710 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-07 04:39:14.058718 | orchestrator | Tuesday 07 April 2026 04:39:09 +0000 (0:00:00.500) 0:00:41.050 ********* 2026-04-07 04:39:14.058724 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058731 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058738 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058744 | orchestrator | 2026-04-07 04:39:14.058751 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-07 04:39:14.058772 | orchestrator | Tuesday 07 April 2026 04:39:10 +0000 (0:00:00.348) 0:00:41.399 ********* 2026-04-07 04:39:14.058779 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058786 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058792 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058799 | orchestrator | 2026-04-07 04:39:14.058806 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-07 04:39:14.058819 | orchestrator | Tuesday 07 April 2026 04:39:10 +0000 (0:00:00.351) 0:00:41.750 ********* 2026-04-07 04:39:14.058826 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058833 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058839 | orchestrator | skipping: 
[testbed-node-2] 2026-04-07 04:39:14.058846 | orchestrator | 2026-04-07 04:39:14.058853 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-07 04:39:14.058859 | orchestrator | Tuesday 07 April 2026 04:39:10 +0000 (0:00:00.360) 0:00:42.111 ********* 2026-04-07 04:39:14.058866 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058873 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058879 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058886 | orchestrator | 2026-04-07 04:39:14.058893 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-07 04:39:14.058899 | orchestrator | Tuesday 07 April 2026 04:39:11 +0000 (0:00:00.545) 0:00:42.657 ********* 2026-04-07 04:39:14.058906 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058912 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:14.058919 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:14.058926 | orchestrator | 2026-04-07 04:39:14.058932 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-07 04:39:14.058939 | orchestrator | Tuesday 07 April 2026 04:39:11 +0000 (0:00:00.328) 0:00:42.985 ********* 2026-04-07 04:39:14.058951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:14.058962 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:14.058981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:17.131465 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:17.131570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:17.131591 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:17.131604 | orchestrator | 2026-04-07 04:39:17.131617 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-07 04:39:17.131698 | orchestrator | Tuesday 07 April 2026 04:39:14 +0000 (0:00:02.479) 0:00:45.465 ********* 2026-04-07 04:39:17.131712 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:17.131723 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:17.131734 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:17.131746 | orchestrator | 2026-04-07 04:39:17.131766 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-07 04:39:17.131784 | orchestrator | Tuesday 07 April 2026 04:39:14 +0000 (0:00:00.347) 0:00:45.812 ********* 2026-04-07 04:39:17.131847 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:17.131901 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:39:17.131915 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:17.131928 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:39:17.131940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-07 04:39:17.131963 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:39:17.131981 | orchestrator | 2026-04-07 04:39:17.131996 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-07 
04:39:17.132010 | orchestrator | Tuesday 07 April 2026 04:39:17 +0000 (0:00:02.385) 0:00:48.198 ********* 2026-04-07 04:39:17.132031 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.822877 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823012 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823029 | orchestrator | 2026-04-07 04:41:15.823042 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-07 04:41:15.823055 | orchestrator | Tuesday 07 April 2026 04:39:17 +0000 (0:00:00.711) 0:00:48.909 ********* 2026-04-07 04:41:15.823067 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.823078 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823090 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823101 | orchestrator | 2026-04-07 04:41:15.823113 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-07 04:41:15.823124 | orchestrator | Tuesday 07 April 2026 04:39:18 +0000 (0:00:00.353) 0:00:49.263 ********* 2026-04-07 04:41:15.823135 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.823146 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823157 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823168 | orchestrator | 2026-04-07 04:41:15.823222 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-07 04:41:15.823235 | orchestrator | Tuesday 07 April 2026 04:39:18 +0000 (0:00:00.327) 0:00:49.590 ********* 2026-04-07 04:41:15.823246 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.823257 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823268 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823279 | orchestrator | 2026-04-07 04:41:15.823290 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-07 
04:41:15.823301 | orchestrator | Tuesday 07 April 2026 04:39:19 +0000 (0:00:01.149) 0:00:50.740 ********* 2026-04-07 04:41:15.823312 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.823323 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823334 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823345 | orchestrator | 2026-04-07 04:41:15.823356 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-07 04:41:15.823366 | orchestrator | Tuesday 07 April 2026 04:39:20 +0000 (0:00:00.723) 0:00:51.464 ********* 2026-04-07 04:41:15.823377 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823428 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823441 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823451 | orchestrator | 2026-04-07 04:41:15.823462 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-07 04:41:15.823473 | orchestrator | Tuesday 07 April 2026 04:39:21 +0000 (0:00:01.033) 0:00:52.497 ********* 2026-04-07 04:41:15.823484 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823495 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823506 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823516 | orchestrator | 2026-04-07 04:41:15.823527 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-07 04:41:15.823538 | orchestrator | Tuesday 07 April 2026 04:39:21 +0000 (0:00:00.352) 0:00:52.850 ********* 2026-04-07 04:41:15.823549 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823559 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823570 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823581 | orchestrator | 2026-04-07 04:41:15.823592 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-07 04:41:15.823602 | orchestrator | Tuesday 07 April 2026 
04:39:22 +0000 (0:00:00.373) 0:00:53.223 ********* 2026-04-07 04:41:15.823613 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823624 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823634 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823645 | orchestrator | 2026-04-07 04:41:15.823656 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-07 04:41:15.823667 | orchestrator | Tuesday 07 April 2026 04:39:22 +0000 (0:00:00.769) 0:00:53.993 ********* 2026-04-07 04:41:15.823677 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823688 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823699 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823709 | orchestrator | 2026-04-07 04:41:15.823725 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-07 04:41:15.823737 | orchestrator | Tuesday 07 April 2026 04:39:23 +0000 (0:00:00.552) 0:00:54.545 ********* 2026-04-07 04:41:15.823748 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.823759 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.823770 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.823781 | orchestrator | 2026-04-07 04:41:15.823791 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-07 04:41:15.823802 | orchestrator | Tuesday 07 April 2026 04:39:23 +0000 (0:00:00.337) 0:00:54.883 ********* 2026-04-07 04:41:15.823835 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823846 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823857 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823868 | orchestrator | 2026-04-07 04:41:15.823879 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-07 04:41:15.823890 | orchestrator | Tuesday 07 April 2026 04:39:26 +0000 (0:00:02.411) 0:00:57.294 ********* 
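The "Check MariaDB service WSREP sync status" / "Extract MariaDB service WSREP sync status" tasks above read the Galera status variable `wsrep_local_state_comment` on each node, and the play only proceeds when every node in the shard reports `Synced`. A minimal sketch of that decision, assuming the tab-separated output format of `SHOW STATUS LIKE 'wsrep_local_state_comment'`; the function names and parsing are illustrative, not kolla-ansible's actual implementation:

```python
def wsrep_synced(status_output: str) -> bool:
    """Return True when a Galera node reports the 'Synced' WSREP state.

    status_output is assumed to look like the tab-separated mysql client
    output of SHOW STATUS LIKE 'wsrep_local_state_comment', e.g.
    "wsrep_local_state_comment\tSynced".
    """
    for line in status_output.splitlines():
        if line.startswith("wsrep_local_state_comment"):
            # Any other state (e.g. 'Donor/Desynced', 'Joined') means the
            # node is still transferring state and is not safe to restart.
            return line.split("\t", 1)[1].strip() == "Synced"
    return False


def cluster_synced(per_node_output: dict) -> bool:
    """Mirror the 'Fail when MariaDB services are not synced across the
    whole cluster' gate: every node must be synced."""
    return all(wsrep_synced(out) for out in per_node_output.values())
```

In the run above that gate was skipped on all three nodes because every node already reported a synced state.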
2026-04-07 04:41:15.823901 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823911 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823922 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823933 | orchestrator | 2026-04-07 04:41:15.823944 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-07 04:41:15.823954 | orchestrator | Tuesday 07 April 2026 04:39:26 +0000 (0:00:00.370) 0:00:57.665 ********* 2026-04-07 04:41:15.823965 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.823976 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.823987 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.823998 | orchestrator | 2026-04-07 04:41:15.824009 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-07 04:41:15.824020 | orchestrator | Tuesday 07 April 2026 04:39:27 +0000 (0:00:00.555) 0:00:58.221 ********* 2026-04-07 04:41:15.824030 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.824041 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.824052 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.824082 | orchestrator | 2026-04-07 04:41:15.824102 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 04:41:15.824120 | orchestrator | Tuesday 07 April 2026 04:39:27 +0000 (0:00:00.783) 0:00:59.004 ********* 2026-04-07 04:41:15.824139 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.824157 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.824172 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.824200 | orchestrator | 2026-04-07 04:41:15.824211 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-07 04:41:15.824222 | orchestrator | Tuesday 07 April 2026 04:39:28 +0000 (0:00:00.317) 0:00:59.322 ********* 2026-04-07 04:41:15.824233 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.824244 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.824255 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.824265 | orchestrator | 2026-04-07 04:41:15.824276 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-07 04:41:15.824287 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-07 04:41:15.824298 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-07 04:41:15.824319 | orchestrator | Tuesday 07 April 2026 04:39:29 +0000 (0:00:00.968) 0:01:00.290 ********* 2026-04-07 04:41:15.824330 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:41:15.824341 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:41:15.824352 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:41:15.824363 | orchestrator | 2026-04-07 04:41:15.824374 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-07 04:41:15.824384 | orchestrator | Tuesday 07 April 2026 04:39:29 +0000 (0:00:00.377) 0:01:00.667 ********* 2026-04-07 04:41:15.824395 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:15.824406 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:15.824417 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:15.824427 | orchestrator | 2026-04-07 04:41:15.824438 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-07 04:41:15.824449 | orchestrator | 2026-04-07 04:41:15.824460 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-07 04:41:15.824471 | orchestrator | Tuesday 07 April 2026 04:39:30 +0000 (0:00:00.947) 0:01:01.615 ********* 2026-04-07 04:41:15.824481 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:41:15.824492 | orchestrator | 2026-04-07 04:41:15.824503 | 
orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 04:41:15.824514 | orchestrator | Tuesday 07 April 2026 04:39:55 +0000 (0:00:25.157) 0:01:26.773 ********* 2026-04-07 04:41:15.824525 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.824535 | orchestrator | 2026-04-07 04:41:15.824546 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 04:41:15.824557 | orchestrator | Tuesday 07 April 2026 04:40:01 +0000 (0:00:05.594) 0:01:32.367 ********* 2026-04-07 04:41:15.824568 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:15.824579 | orchestrator | 2026-04-07 04:41:15.824589 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-07 04:41:15.824664 | orchestrator | 2026-04-07 04:41:15.824677 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-07 04:41:15.824688 | orchestrator | Tuesday 07 April 2026 04:40:03 +0000 (0:00:02.662) 0:01:35.030 ********* 2026-04-07 04:41:15.824699 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:41:15.824710 | orchestrator | 2026-04-07 04:41:15.824721 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 04:41:15.824732 | orchestrator | Tuesday 07 April 2026 04:40:28 +0000 (0:00:25.004) 0:02:00.034 ********* 2026-04-07 04:41:15.824743 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
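The "Wait for MariaDB service port liveness" task above retries a TCP connect until the freshly restarted container accepts connections again; the log shows one `FAILED - RETRYING` attempt on testbed-node-1 before it succeeded. A rough equivalent of that retry loop, assuming plain TCP reachability is the liveness criterion (host, port, and retry counts are placeholders, not values from this run):

```python
import socket
import time


def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 0.1) -> bool:
    """Retry a TCP connect until the port accepts connections.

    Each failed attempt consumes one retry and sleeps before trying again,
    matching the "(10 retries left)" countdown seen in the log.
    """
    for _attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # port is accepting connections
        except OSError:
            time.sleep(delay)
    return False
```

Only once the port is live does the play move on to the WSREP sync wait, so a node is never declared healthy on reachability alone.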
2026-04-07 04:41:15.824756 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.824775 | orchestrator | 2026-04-07 04:41:15.824786 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 04:41:15.824797 | orchestrator | Tuesday 07 April 2026 04:40:37 +0000 (0:00:08.202) 0:02:08.237 ********* 2026-04-07 04:41:15.824843 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:15.824856 | orchestrator | 2026-04-07 04:41:15.824867 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-07 04:41:15.824878 | orchestrator | 2026-04-07 04:41:15.824889 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-07 04:41:15.824900 | orchestrator | Tuesday 07 April 2026 04:40:39 +0000 (0:00:02.516) 0:02:10.753 ********* 2026-04-07 04:41:15.824911 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:41:15.824922 | orchestrator | 2026-04-07 04:41:15.824933 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-07 04:41:15.824943 | orchestrator | Tuesday 07 April 2026 04:41:06 +0000 (0:00:27.090) 0:02:37.843 ********* 2026-04-07 04:41:15.824954 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.824965 | orchestrator | 2026-04-07 04:41:15.824976 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-07 04:41:15.824987 | orchestrator | Tuesday 07 April 2026 04:41:11 +0000 (0:00:04.954) 0:02:42.798 ********* 2026-04-07 04:41:15.824997 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:15.825008 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-07 04:41:15.825019 | orchestrator | 2026-04-07 04:41:15.825030 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-07 04:41:15.825041 | orchestrator | skipping: no hosts 
matched 2026-04-07 04:41:15.825052 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-07 04:41:15.825062 | orchestrator | mariadb_bootstrap_restart 2026-04-07 04:41:15.825073 | orchestrator | 2026-04-07 04:41:15.825084 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-07 04:41:15.825095 | orchestrator | skipping: no hosts matched 2026-04-07 04:41:15.825106 | orchestrator | 2026-04-07 04:41:15.825116 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-07 04:41:15.825127 | orchestrator | 2026-04-07 04:41:15.825138 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-07 04:41:15.825149 | orchestrator | Tuesday 07 April 2026 04:41:14 +0000 (0:00:03.189) 0:02:45.987 ********* 2026-04-07 04:41:15.825160 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:41:15.825171 | orchestrator | 2026-04-07 04:41:15.825182 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-07 04:41:15.825201 | orchestrator | Tuesday 07 April 2026 04:41:15 +0000 (0:00:01.003) 0:02:46.991 ********* 2026-04-07 04:41:55.442517 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.442651 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.442669 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:55.442701 | orchestrator | 2026-04-07 04:41:55.442755 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-07 04:41:55.442770 | orchestrator | Tuesday 07 April 2026 04:41:18 +0000 (0:00:02.328) 0:02:49.320 ********* 2026-04-07 04:41:55.442781 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.442793 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.442804 | orchestrator | changed: [testbed-node-0] 2026-04-07 
04:41:55.442815 | orchestrator | 2026-04-07 04:41:55.442827 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-07 04:41:55.442838 | orchestrator | Tuesday 07 April 2026 04:41:20 +0000 (0:00:02.255) 0:02:51.576 ********* 2026-04-07 04:41:55.442849 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.442860 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.442871 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:55.442882 | orchestrator | 2026-04-07 04:41:55.442893 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-07 04:41:55.442977 | orchestrator | Tuesday 07 April 2026 04:41:22 +0000 (0:00:02.252) 0:02:53.829 ********* 2026-04-07 04:41:55.442991 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.443001 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.443012 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:41:55.443023 | orchestrator | 2026-04-07 04:41:55.443034 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-07 04:41:55.443045 | orchestrator | Tuesday 07 April 2026 04:41:24 +0000 (0:00:02.178) 0:02:56.007 ********* 2026-04-07 04:41:55.443057 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:55.443069 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:55.443082 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:55.443094 | orchestrator | 2026-04-07 04:41:55.443106 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-07 04:41:55.443119 | orchestrator | Tuesday 07 April 2026 04:41:30 +0000 (0:00:05.700) 0:03:01.708 ********* 2026-04-07 04:41:55.443132 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.443145 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:55.443158 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.443171 | 
orchestrator | 2026-04-07 04:41:55.443184 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-07 04:41:55.443197 | orchestrator | Tuesday 07 April 2026 04:41:32 +0000 (0:00:02.394) 0:03:04.102 ********* 2026-04-07 04:41:55.443209 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:41:55.443222 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:41:55.443234 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:41:55.443246 | orchestrator | 2026-04-07 04:41:55.443259 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-07 04:41:55.443272 | orchestrator | Tuesday 07 April 2026 04:41:33 +0000 (0:00:00.630) 0:03:04.733 ********* 2026-04-07 04:41:55.443284 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:41:55.443297 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:41:55.443310 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:41:55.443323 | orchestrator | 2026-04-07 04:41:55.443336 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-07 04:41:55.443348 | orchestrator | Tuesday 07 April 2026 04:41:36 +0000 (0:00:02.952) 0:03:07.686 ********* 2026-04-07 04:41:55.443361 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:41:55.443374 | orchestrator | 2026-04-07 04:41:55.443387 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-04-07 04:41:55.443399 | orchestrator | Tuesday 07 April 2026 04:41:37 +0000 (0:00:01.149) 0:03:08.835 ********* 2026-04-07 04:41:55.443412 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:41:55.443423 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:41:55.443448 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:41:55.443459 | orchestrator | 2026-04-07 04:41:55.443470 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-07 04:41:55.443482 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-07 04:41:55.443496 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-07 04:41:55.443507 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-07 04:41:55.443518 | orchestrator | 2026-04-07 04:41:55.443529 | orchestrator | 2026-04-07 04:41:55.443540 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:41:55.443551 | orchestrator | Tuesday 07 April 2026 04:41:54 +0000 (0:00:17.270) 0:03:26.106 ********* 2026-04-07 04:41:55.443561 | orchestrator | =============================================================================== 2026-04-07 04:41:55.443572 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.25s 2026-04-07 04:41:55.443592 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.75s 2026-04-07 04:41:55.443603 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.27s 2026-04-07 04:41:55.443614 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 8.37s 2026-04-07 04:41:55.443624 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.70s 2026-04-07 04:41:55.443635 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.90s 2026-04-07 04:41:55.443646 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.34s 2026-04-07 04:41:55.443657 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.28s 2026-04-07 04:41:55.443668 | orchestrator | service-check-containers : Include 
tasks -------------------------------- 3.16s 2026-04-07 04:41:55.443697 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.03s 2026-04-07 04:41:55.443709 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2026-04-07 04:41:55.443720 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.73s 2026-04-07 04:41:55.443730 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.64s 2026-04-07 04:41:55.443748 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.48s 2026-04-07 04:41:55.443768 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.41s 2026-04-07 04:41:55.443787 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.39s 2026-04-07 04:41:55.443806 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.39s 2026-04-07 04:41:55.443828 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.33s 2026-04-07 04:41:55.443848 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2026-04-07 04:41:55.443866 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.26s 2026-04-07 04:41:55.623513 | orchestrator | + osism apply -a upgrade rabbitmq 2026-04-07 04:41:56.959480 | orchestrator | 2026-04-07 04:41:56 | INFO  | Prepare task for execution of rabbitmq. 2026-04-07 04:41:57.032074 | orchestrator | 2026-04-07 04:41:57 | INFO  | Task 089145c6-5ce5-496e-ad08-17b8460de849 (rabbitmq) was prepared for execution. 2026-04-07 04:41:57.032175 | orchestrator | 2026-04-07 04:41:57 | INFO  | It takes a moment until task 089145c6-5ce5-496e-ad08-17b8460de849 (rabbitmq) has been started and output is visible here. 
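The rabbitmq upgrade play that starts here runs a precheck ("Check if running RabbitMQ is at most one version behind") comparing the broker version in the running container against the version shipped in the new image, plus a separate guard against downgrades ("Catch when RabbitMQ is being downgraded"). A simplified sketch of that kind of guard; the exact comparison kolla-ansible applies may differ, and the version strings are illustrative:

```python
def upgrade_step_allowed(current: str, target: str) -> bool:
    """Permit upgrades of at most one minor version; refuse downgrades.

    This approximates the precheck seen in the log; kolla-ansible's real
    comparison logic may differ in detail. Versions are assumed to be
    'major.minor.patch' strings such as '4.1.8'.
    """
    cur = tuple(int(p) for p in current.split(".")[:2])
    tgt = tuple(int(p) for p in target.split(".")[:2])
    if tgt < cur:
        return False  # downgrade: caught by a dedicated assertion
    if tgt[0] == cur[0]:
        return tgt[1] - cur[1] <= 1  # e.g. 4.0.x -> 4.1.x is one step
    # Crossing a major boundary needs knowledge of the last minor in the
    # old series, which this sketch does not have, so reject conservatively.
    return False
```

In the run above both assertions passed ("All assertions passed"), so the upgrade to the 4.1.8 image was allowed to continue.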
2026-04-07 04:42:40.651100 | orchestrator | 2026-04-07 04:42:40.651297 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 04:42:40.651326 | orchestrator | 2026-04-07 04:42:40.651346 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 04:42:40.651364 | orchestrator | Tuesday 07 April 2026 04:42:02 +0000 (0:00:01.825) 0:00:01.825 ********* 2026-04-07 04:42:40.651385 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:42:40.651406 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:42:40.651426 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:42:40.651446 | orchestrator | 2026-04-07 04:42:40.651462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 04:42:40.651473 | orchestrator | Tuesday 07 April 2026 04:42:03 +0000 (0:00:01.745) 0:00:03.571 ********* 2026-04-07 04:42:40.651485 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-07 04:42:40.651496 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-07 04:42:40.651507 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-07 04:42:40.651518 | orchestrator | 2026-04-07 04:42:40.651530 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-07 04:42:40.651541 | orchestrator | 2026-04-07 04:42:40.651552 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-07 04:42:40.651563 | orchestrator | Tuesday 07 April 2026 04:42:06 +0000 (0:00:02.785) 0:00:06.356 ********* 2026-04-07 04:42:40.651602 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:42:40.651617 | orchestrator | 2026-04-07 04:42:40.651630 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-04-07 04:42:40.651659 | orchestrator | Tuesday 07 April 2026 04:42:09 +0000 (0:00:03.065) 0:00:09.422 *********
2026-04-07 04:42:40.651672 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:42:40.651690 | orchestrator |
2026-04-07 04:42:40.651708 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-07 04:42:40.651727 | orchestrator | Tuesday 07 April 2026 04:42:12 +0000 (0:00:02.763) 0:00:12.185 *********
2026-04-07 04:42:40.651746 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:42:40.651765 | orchestrator |
2026-04-07 04:42:40.651784 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-07 04:42:40.651798 | orchestrator | Tuesday 07 April 2026 04:42:15 +0000 (0:00:03.170) 0:00:15.356 *********
2026-04-07 04:42:40.651809 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:42:40.651821 | orchestrator |
2026-04-07 04:42:40.651832 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-07 04:42:40.651843 | orchestrator | Tuesday 07 April 2026 04:42:25 +0000 (0:00:09.437) 0:00:24.793 *********
2026-04-07 04:42:40.651853 | orchestrator | ok: [testbed-node-0] => {
2026-04-07 04:42:40.651864 | orchestrator |  "changed": false,
2026-04-07 04:42:40.651875 | orchestrator |  "msg": "All assertions passed"
2026-04-07 04:42:40.651886 | orchestrator | }
2026-04-07 04:42:40.651897 | orchestrator |
2026-04-07 04:42:40.651908 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-07 04:42:40.651918 | orchestrator | Tuesday 07 April 2026 04:42:26 +0000 (0:00:01.352) 0:00:26.146 *********
2026-04-07 04:42:40.651929 | orchestrator | ok: [testbed-node-0] => {
2026-04-07 04:42:40.651940 | orchestrator |  "changed": false,
2026-04-07 04:42:40.651950 | orchestrator |  "msg": "All assertions passed"
2026-04-07 04:42:40.651993 | orchestrator | }
2026-04-07 04:42:40.652015 | orchestrator |
2026-04-07 04:42:40.652033 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-07 04:42:40.652051 | orchestrator | Tuesday 07 April 2026 04:42:28 +0000 (0:00:01.701) 0:00:27.848 *********
2026-04-07 04:42:40.652071 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:42:40.652091 | orchestrator |
2026-04-07 04:42:40.652109 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-07 04:42:40.652128 | orchestrator | Tuesday 07 April 2026 04:42:30 +0000 (0:00:01.923) 0:00:29.772 *********
2026-04-07 04:42:40.652146 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:42:40.652163 | orchestrator |
2026-04-07 04:42:40.652174 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-07 04:42:40.652185 | orchestrator | Tuesday 07 April 2026 04:42:32 +0000 (0:00:02.440) 0:00:32.213 *********
2026-04-07 04:42:40.652195 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:42:40.652206 | orchestrator |
2026-04-07 04:42:40.652217 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-07 04:42:40.652228 | orchestrator | Tuesday 07 April 2026 04:42:35 +0000 (0:00:03.047) 0:00:35.260 *********
2026-04-07 04:42:40.652238 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:42:40.652249 | orchestrator |
2026-04-07 04:42:40.652260 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-07 04:42:40.652271 | orchestrator | Tuesday 07 April 2026 04:42:37 +0000 (0:00:01.700) 0:00:36.961 *********
2026-04-07 04:42:40.652313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:42:40.652350 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:42:40.652363 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:42:40.652376 | orchestrator |
2026-04-07 04:42:40.652387 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-07 04:42:40.652398 | orchestrator | Tuesday 07 April 2026 04:42:39 +0000 (0:00:02.033) 0:00:38.995 *********
2026-04-07 04:42:40.652410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:42:40.652438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:00.802095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:00.802224 | orchestrator |
2026-04-07 04:43:00.802239 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-07 04:43:00.802250 | orchestrator | Tuesday 07 April 2026 04:42:41 +0000 (0:00:02.413) 0:00:41.409 *********
2026-04-07 04:43:00.802258 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-07 04:43:00.802267 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-07 04:43:00.802276 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-07 04:43:00.802291 | orchestrator |
2026-04-07 04:43:00.802305 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-07 04:43:00.802319 | orchestrator | Tuesday 07 April 2026 04:42:44 +0000 (0:00:02.478) 0:00:43.888 *********
2026-04-07 04:43:00.802333 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-07 04:43:00.802346 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-07 04:43:00.802359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-07 04:43:00.802373 | orchestrator |
2026-04-07 04:43:00.802387 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-07 04:43:00.802401 | orchestrator | Tuesday 07 April 2026 04:42:46 +0000 (0:00:02.737) 0:00:46.625 *********
2026-04-07 04:43:00.802414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-07 04:43:00.802423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-07 04:43:00.802451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-07 04:43:00.802460 | orchestrator |
2026-04-07 04:43:00.802468 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-07 04:43:00.802476 | orchestrator | Tuesday 07 April 2026 04:42:49 +0000 (0:00:02.372) 0:00:48.998 *********
2026-04-07 04:43:00.802484 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-07 04:43:00.802492 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-07 04:43:00.802500 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-07 04:43:00.802507 | orchestrator |
2026-04-07 04:43:00.802515 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-07 04:43:00.802523 | orchestrator | Tuesday 07 April 2026 04:42:51 +0000 (0:00:02.546) 0:00:51.544 *********
2026-04-07 04:43:00.802531 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-07 04:43:00.802541 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-07 04:43:00.802551 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-07 04:43:00.802560 | orchestrator |
2026-04-07 04:43:00.802569 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-07 04:43:00.802578 | orchestrator | Tuesday 07 April 2026 04:42:54 +0000 (0:00:02.293) 0:00:53.838 *********
2026-04-07 04:43:00.802588 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-07 04:43:00.802598 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-07 04:43:00.802608 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-07 04:43:00.802617 | orchestrator |
2026-04-07 04:43:00.802627 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-07 04:43:00.802636 | orchestrator | Tuesday 07 April 2026 04:42:56 +0000 (0:00:02.270) 0:00:56.109 *********
2026-04-07 04:43:00.802646 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-07 04:43:00.802655 | orchestrator |
2026-04-07 04:43:00.802681 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-07 04:43:00.802691 | orchestrator | Tuesday 07 April 2026 04:42:58 +0000 (0:00:01.877) 0:00:57.986 *********
2026-04-07 04:43:00.802710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:00.802721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:00.802739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:00.802749 | orchestrator |
2026-04-07 04:43:00.802759 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-04-07 04:43:00.802768 | orchestrator | Tuesday 07 April 2026 04:43:00 +0000 (0:00:02.319) 0:01:00.306 *********
2026-04-07 04:43:00.802785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200756 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:43:09.200776 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:43:09.200790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200803 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:43:09.200815 | orchestrator |
2026-04-07 04:43:09.200827 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-04-07 04:43:09.200839 | orchestrator | Tuesday 07 April 2026 04:43:02 +0000 (0:00:01.386) 0:01:01.692 *********
2026-04-07 04:43:09.200852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200864 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:43:09.200901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200923 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:43:09.200935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.200947 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:43:09.200958 | orchestrator |
2026-04-07 04:43:09.200970 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-07 04:43:09.200981 | orchestrator | Tuesday 07 April 2026 04:43:04 +0000 (0:00:01.993) 0:01:03.686 *********
2026-04-07 04:43:09.200993 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:43:09.201005 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:43:09.201043 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:43:09.201055 | orchestrator |
2026-04-07 04:43:09.201066 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-04-07 04:43:09.201077 | orchestrator | Tuesday 07 April 2026 04:43:08 +0000 (0:00:04.117) 0:01:07.803 *********
2026-04-07 04:43:09.201089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:43:09.201116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:44:50.919474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:44:50.919622 | orchestrator |
2026-04-07 04:44:50.919645 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-04-07 04:44:50.919658 | orchestrator | Tuesday 07 April 2026 04:43:10 +0000 (0:00:02.198) 0:01:10.002 *********
2026-04-07 04:44:50.919671 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:44:50.919684 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:44:50.919695 | orchestrator | }
2026-04-07 04:44:50.919706 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:44:50.919717 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:44:50.919728 | orchestrator | }
2026-04-07 04:44:50.919739 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:44:50.919750 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:44:50.919760 | orchestrator | }
2026-04-07 04:44:50.919771 | orchestrator |
2026-04-07 04:44:50.919783 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:44:50.919794 | orchestrator | Tuesday 07 April 2026 04:43:11 +0000 (0:00:01.628) 0:01:11.630 *********
2026-04-07 04:44:50.919807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:44:50.919821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:44:50.919861 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:44:50.919873 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:44:50.919925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-07 04:44:50.919948 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:44:50.919967 | orchestrator |
2026-04-07 04:44:50.919985 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-07 04:44:50.920002 | orchestrator | Tuesday 07 April 2026 04:43:14 +0000 (0:00:02.021) 0:01:13.652 *********
2026-04-07 04:44:50.920019 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:44:50.920038 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:44:50.920057 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:44:50.920074 | orchestrator |
2026-04-07 04:44:50.920091 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 04:44:50.920108 | orchestrator |
2026-04-07 04:44:50.920126 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 04:44:50.920174 | orchestrator | Tuesday 07 April 2026 04:43:15 +0000 (0:00:01.703) 0:01:15.356 *********
2026-04-07 04:44:50.920194 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:44:50.920214 | orchestrator |
2026-04-07 04:44:50.920233 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 04:44:50.920250 | orchestrator | Tuesday 07 April 2026 04:43:17 +0000 (0:00:02.119) 0:01:17.475 *********
2026-04-07 04:44:50.920267 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:44:50.920285 | orchestrator |
2026-04-07 04:44:50.920304 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-07 04:44:50.920325 | orchestrator | Tuesday 07 April 2026 04:43:26 +0000 (0:00:08.964) 0:01:26.440 *********
2026-04-07 04:44:50.920343 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:44:50.920361 | orchestrator |
2026-04-07 04:44:50.920379 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-07 04:44:50.920401 | orchestrator | Tuesday 07 April 2026 04:43:35 +0000 (0:00:09.153) 0:01:35.594 *********
2026-04-07 04:44:50.920419 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:44:50.920438 | orchestrator |
2026-04-07 04:44:50.920460 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 04:44:50.920480 | orchestrator |
2026-04-07 04:44:50.920498 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 04:44:50.920516 | orchestrator | Tuesday 07 April 2026 04:43:45 +0000 (0:00:09.289) 0:01:44.884 *********
2026-04-07 04:44:50.920533 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:44:50.920550 | orchestrator |
2026-04-07 04:44:50.920568 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 04:44:50.920609 | orchestrator | Tuesday 07 April 2026 04:43:46 +0000 (0:00:01.695) 0:01:46.579 *********
2026-04-07 04:44:50.920628 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:44:50.920647 | orchestrator |
2026-04-07 04:44:50.920666 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-07 04:44:50.920685 | orchestrator | Tuesday 07 April 2026 04:43:55 +0000 (0:00:08.243) 0:01:54.822 *********
2026-04-07 04:44:50.920704 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:44:50.920723 | orchestrator |
2026-04-07 04:44:50.920741 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-07 04:44:50.920761 | orchestrator | Tuesday 07 April 2026 04:44:08 +0000 (0:00:13.313) 0:02:08.136 *********
2026-04-07 04:44:50.920779 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:44:50.920798 | orchestrator |
2026-04-07 04:44:50.920816 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-07 04:44:50.920835 | orchestrator |
2026-04-07 04:44:50.920855 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-07 04:44:50.920874 | orchestrator | Tuesday 07 April 2026 04:44:18 +0000 (0:00:09.525) 0:02:17.661 *********
2026-04-07 04:44:50.920892 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:44:50.920911 | orchestrator |
2026-04-07 04:44:50.920930 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-07 04:44:50.920949 | orchestrator | Tuesday 07 April 2026 04:44:19 +0000 (0:00:01.726)
0:02:19.388 ********* 2026-04-07 04:44:50.920968 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:44:50.920987 | orchestrator | 2026-04-07 04:44:50.921006 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-07 04:44:50.921022 | orchestrator | Tuesday 07 April 2026 04:44:28 +0000 (0:00:08.378) 0:02:27.767 ********* 2026-04-07 04:44:50.921040 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:44:50.921059 | orchestrator | 2026-04-07 04:44:50.921077 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-07 04:44:50.921096 | orchestrator | Tuesday 07 April 2026 04:44:41 +0000 (0:00:13.523) 0:02:41.290 ********* 2026-04-07 04:44:50.921115 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:44:50.921134 | orchestrator | 2026-04-07 04:44:50.921179 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-07 04:44:50.921198 | orchestrator | 2026-04-07 04:44:50.921227 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-07 04:44:50.921261 | orchestrator | Tuesday 07 April 2026 04:44:50 +0000 (0:00:09.261) 0:02:50.552 ********* 2026-04-07 04:44:57.239046 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:44:57.239190 | orchestrator | 2026-04-07 04:44:57.239209 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-07 04:44:57.239222 | orchestrator | Tuesday 07 April 2026 04:44:52 +0000 (0:00:01.518) 0:02:52.070 ********* 2026-04-07 04:44:57.239233 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:44:57.239246 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:44:57.239257 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:44:57.239268 | orchestrator | 2026-04-07 04:44:57.239279 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-07 04:44:57.239292 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 04:44:57.239304 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 04:44:57.239315 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-07 04:44:57.239327 | orchestrator | 2026-04-07 04:44:57.239339 | orchestrator | 2026-04-07 04:44:57.239350 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-07 04:44:57.239361 | orchestrator | Tuesday 07 April 2026 04:44:56 +0000 (0:00:04.381) 0:02:56.452 ********* 2026-04-07 04:44:57.239395 | orchestrator | =============================================================================== 2026-04-07 04:44:57.239406 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 35.99s 2026-04-07 04:44:57.239417 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 28.08s 2026-04-07 04:44:57.239428 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 25.59s 2026-04-07 04:44:57.239439 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.44s 2026-04-07 04:44:57.239450 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.54s 2026-04-07 04:44:57.239460 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.38s 2026-04-07 04:44:57.239471 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.12s 2026-04-07 04:44:57.239482 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.17s 2026-04-07 04:44:57.239493 | orchestrator | rabbitmq : include_tasks 
------------------------------------------------ 3.07s 2026-04-07 04:44:57.239503 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.05s 2026-04-07 04:44:57.239514 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.79s 2026-04-07 04:44:57.239525 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.76s 2026-04-07 04:44:57.239535 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.74s 2026-04-07 04:44:57.239546 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.55s 2026-04-07 04:44:57.239557 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.48s 2026-04-07 04:44:57.239568 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.44s 2026-04-07 04:44:57.239579 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.41s 2026-04-07 04:44:57.239589 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.37s 2026-04-07 04:44:57.239604 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.32s 2026-04-07 04:44:57.239624 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.29s 2026-04-07 04:44:57.431605 | orchestrator | + osism apply -a upgrade openvswitch 2026-04-07 04:44:58.812861 | orchestrator | 2026-04-07 04:44:58 | INFO  | Prepare task for execution of openvswitch. 2026-04-07 04:44:58.878259 | orchestrator | 2026-04-07 04:44:58 | INFO  | Task f7461ba5-1dfa-4bf4-8d5f-0ce0089db968 (openvswitch) was prepared for execution. 2026-04-07 04:44:58.878347 | orchestrator | 2026-04-07 04:44:58 | INFO  | It takes a moment until task f7461ba5-1dfa-4bf4-8d5f-0ce0089db968 (openvswitch) has been started and output is visible here. 
2026-04-07 04:45:24.251594 | orchestrator | 2026-04-07 04:45:24.251707 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-07 04:45:24.251731 | orchestrator | 2026-04-07 04:45:24.251750 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-07 04:45:24.251768 | orchestrator | Tuesday 07 April 2026 04:45:03 +0000 (0:00:01.746) 0:00:01.747 ********* 2026-04-07 04:45:24.251780 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:45:24.251790 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:45:24.251799 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:45:24.251808 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:45:24.251816 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:45:24.251825 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:45:24.251834 | orchestrator | 2026-04-07 04:45:24.251843 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-07 04:45:24.251852 | orchestrator | Tuesday 07 April 2026 04:45:06 +0000 (0:00:02.570) 0:00:04.317 ********* 2026-04-07 04:45:24.251861 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251883 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251914 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251923 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251932 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251940 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-07 04:45:24.251949 | orchestrator | 2026-04-07 04:45:24.251957 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-04-07 04:45:24.251966 | orchestrator | 2026-04-07 04:45:24.251975 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-07 04:45:24.251984 | orchestrator | Tuesday 07 April 2026 04:45:09 +0000 (0:00:02.825) 0:00:07.143 ********* 2026-04-07 04:45:24.251994 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 04:45:24.252004 | orchestrator | 2026-04-07 04:45:24.252013 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-07 04:45:24.252021 | orchestrator | Tuesday 07 April 2026 04:45:13 +0000 (0:00:03.902) 0:00:11.046 ********* 2026-04-07 04:45:24.252030 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-07 04:45:24.252039 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-07 04:45:24.252048 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-04-07 04:45:24.252056 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-07 04:45:24.252065 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-07 04:45:24.252073 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-07 04:45:24.252082 | orchestrator | 2026-04-07 04:45:24.252090 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-07 04:45:24.252099 | orchestrator | Tuesday 07 April 2026 04:45:15 +0000 (0:00:02.537) 0:00:13.584 ********* 2026-04-07 04:45:24.252108 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-07 04:45:24.252116 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-04-07 04:45:24.252125 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-07 04:45:24.252133 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-07 
04:45:24.252143 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-07 04:45:24.252153 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-07 04:45:24.252163 | orchestrator | 2026-04-07 04:45:24.252173 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-07 04:45:24.252216 | orchestrator | Tuesday 07 April 2026 04:45:18 +0000 (0:00:02.692) 0:00:16.276 ********* 2026-04-07 04:45:24.252226 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-07 04:45:24.252237 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:45:24.252247 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-07 04:45:24.252258 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:45:24.252269 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-07 04:45:24.252279 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:45:24.252289 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-07 04:45:24.252300 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:45:24.252311 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-07 04:45:24.252321 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:45:24.252332 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-07 04:45:24.252343 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:45:24.252353 | orchestrator | 2026-04-07 04:45:24.252363 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-07 04:45:24.252373 | orchestrator | Tuesday 07 April 2026 04:45:20 +0000 (0:00:02.499) 0:00:18.776 ********* 2026-04-07 04:45:24.252383 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:45:24.252400 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:45:24.252411 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:45:24.252421 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 04:45:24.252431 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:45:24.252441 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:45:24.252451 | orchestrator | 2026-04-07 04:45:24.252461 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-07 04:45:24.252472 | orchestrator | Tuesday 07 April 2026 04:45:23 +0000 (0:00:02.260) 0:00:21.037 ********* 2026-04-07 04:45:24.252503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252523 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252533 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252542 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252567 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:24.252588 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807709 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807839 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807852 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807884 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807897 | orchestrator | 2026-04-07 04:45:27.807910 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-07 04:45:27.807923 | orchestrator | Tuesday 07 April 2026 04:45:25 +0000 (0:00:02.727) 0:00:23.764 ********* 2026-04-07 04:45:27.807959 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807974 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807985 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.807997 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.808016 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-07 04:45:27.808028 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:27.808053 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.542078 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.542253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.542988 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.543013 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.543040 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:33.543053 | orchestrator |
2026-04-07 04:45:33.543067 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-07 04:45:33.543079 | orchestrator | Tuesday 07 April 2026 04:45:29 +0000 (0:00:03.665) 0:00:27.430 *********
2026-04-07 04:45:33.543090 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:45:33.543102 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:45:33.543113 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:45:33.543124 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:45:33.543134 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:45:33.543145 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:45:33.543156 | orchestrator |
2026-04-07 04:45:33.543168 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-07 04:45:33.543230 | orchestrator | Tuesday 07 April 2026 04:45:31 +0000 (0:00:02.345) 0:00:29.775 *********
2026-04-07 04:45:33.543243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:33.543269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:33.543281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:33.543293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:33.543310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:33.543332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:37.869964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870296 | orchestrator |
2026-04-07 04:45:37.870309 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-07 04:45:37.870320 | orchestrator | Tuesday 07 April 2026 04:45:35 +0000 (0:00:03.516) 0:00:33.292 *********
2026-04-07 04:45:37.870331 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:45:37.870343 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870353 | orchestrator | }
2026-04-07 04:45:37.870363 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:45:37.870373 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870382 | orchestrator | }
2026-04-07 04:45:37.870392 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:45:37.870402 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870411 | orchestrator | }
2026-04-07 04:45:37.870421 | orchestrator | changed: [testbed-node-3] => {
2026-04-07 04:45:37.870431 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870440 | orchestrator | }
2026-04-07 04:45:37.870450 | orchestrator | changed: [testbed-node-4] => {
2026-04-07 04:45:37.870459 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870469 | orchestrator | }
2026-04-07 04:45:37.870479 | orchestrator | changed: [testbed-node-5] => {
2026-04-07 04:45:37.870489 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:45:37.870499 | orchestrator | }
2026-04-07 04:45:37.870510 | orchestrator |
2026-04-07 04:45:37.870523 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:45:37.870535 | orchestrator | Tuesday 07 April 2026 04:45:37 +0000 (0:00:01.830) 0:00:35.123 *********
2026-04-07 04:45:37.870548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:37.870561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870573 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:45:37.870590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:45:37.870602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:45:37.870629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:46:13.032852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:46:13.032968 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:46:13.032986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:46:13.033000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:46:13.033013 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:46:13.033024 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:46:13.033053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:46:13.033087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-07 04:46:13.033099 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:46:13.033129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-07 04:46:13.033142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch',
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-07 04:46:13.033153 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:46:13.033165 | orchestrator | 2026-04-07 04:46:13.033177 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033189 | orchestrator | Tuesday 07 April 2026 04:45:40 +0000 (0:00:02.814) 0:00:37.937 ********* 2026-04-07 04:46:13.033200 | orchestrator | 2026-04-07 04:46:13.033211 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033222 | orchestrator | Tuesday 07 April 2026 04:45:40 +0000 (0:00:00.724) 0:00:38.661 ********* 2026-04-07 04:46:13.033233 | orchestrator | 2026-04-07 04:46:13.033330 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033348 | orchestrator | Tuesday 07 April 2026 04:45:41 +0000 (0:00:00.529) 0:00:39.191 ********* 2026-04-07 04:46:13.033359 | orchestrator | 2026-04-07 04:46:13.033370 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033381 | orchestrator | Tuesday 07 April 2026 04:45:41 +0000 (0:00:00.517) 0:00:39.709 ********* 2026-04-07 04:46:13.033394 | orchestrator | 2026-04-07 04:46:13.033407 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033431 | orchestrator | Tuesday 07 April 2026 04:45:42 +0000 (0:00:00.530) 0:00:40.240 ********* 2026-04-07 04:46:13.033444 | orchestrator 
| 2026-04-07 04:46:13.033457 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-07 04:46:13.033469 | orchestrator | Tuesday 07 April 2026 04:45:42 +0000 (0:00:00.515) 0:00:40.756 ********* 2026-04-07 04:46:13.033481 | orchestrator | 2026-04-07 04:46:13.033496 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-07 04:46:13.033508 | orchestrator | Tuesday 07 April 2026 04:45:43 +0000 (0:00:00.906) 0:00:41.663 ********* 2026-04-07 04:46:13.033521 | orchestrator | changed: [testbed-node-4] 2026-04-07 04:46:13.033534 | orchestrator | changed: [testbed-node-3] 2026-04-07 04:46:13.033547 | orchestrator | changed: [testbed-node-5] 2026-04-07 04:46:13.033560 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:46:13.033573 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:46:13.033593 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:46:13.033606 | orchestrator | 2026-04-07 04:46:13.033619 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-07 04:46:13.033633 | orchestrator | Tuesday 07 April 2026 04:45:56 +0000 (0:00:12.445) 0:00:54.109 ********* 2026-04-07 04:46:13.033646 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:46:13.033660 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:46:13.033672 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:46:13.033685 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:46:13.033698 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:46:13.033711 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:46:13.033723 | orchestrator | 2026-04-07 04:46:13.033736 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-07 04:46:13.033749 | orchestrator | Tuesday 07 April 2026 04:45:58 +0000 (0:00:02.426) 0:00:56.535 ********* 2026-04-07 04:46:13.033760 | orchestrator | changed: [testbed-node-4] 2026-04-07 
04:46:13.033770 | orchestrator | changed: [testbed-node-5] 2026-04-07 04:46:13.033781 | orchestrator | changed: [testbed-node-3] 2026-04-07 04:46:13.033792 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:46:13.033802 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:46:13.033813 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:46:13.033823 | orchestrator | 2026-04-07 04:46:13.033834 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-07 04:46:13.033845 | orchestrator | Tuesday 07 April 2026 04:46:10 +0000 (0:00:11.494) 0:01:08.029 ********* 2026-04-07 04:46:13.033856 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-07 04:46:13.033868 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-07 04:46:13.033878 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-07 04:46:13.033889 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-07 04:46:13.033900 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-07 04:46:13.033920 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-07 04:46:26.355101 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-07 04:46:26.355214 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-07 04:46:26.355230 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-07 04:46:26.355242 | orchestrator | ok: 
[testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-07 04:46:26.355253 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-07 04:46:26.355373 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-07 04:46:26.355386 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355398 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355409 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355420 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355430 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355441 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-07 04:46:26.355453 | orchestrator | 2026-04-07 04:46:26.355465 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-07 04:46:26.355477 | orchestrator | Tuesday 07 April 2026 04:46:17 +0000 (0:00:07.562) 0:01:15.592 ********* 2026-04-07 04:46:26.355489 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-07 04:46:26.355500 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:46:26.355512 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-07 04:46:26.355523 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:46:26.355533 | orchestrator | skipping: [testbed-node-5] => 
(item=br-ex)  2026-04-07 04:46:26.355544 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:46:26.355560 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-04-07 04:46:26.355579 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-04-07 04:46:26.355599 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-04-07 04:46:26.355618 | orchestrator | 2026-04-07 04:46:26.355638 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-07 04:46:26.355658 | orchestrator | Tuesday 07 April 2026 04:46:21 +0000 (0:00:03.204) 0:01:18.797 ********* 2026-04-07 04:46:26.355678 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-07 04:46:26.355698 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:46:26.355719 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-07 04:46:26.355741 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:46:26.355761 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-07 04:46:26.355788 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:46:26.355801 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-07 04:46:26.355821 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-07 04:46:26.355840 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-07 04:46:26.355859 | orchestrator | 2026-04-07 04:46:26.355878 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-07 04:46:26.355898 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 04:46:26.355919 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 04:46:26.355939 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-07 
04:46:26.355958 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:46:26.355979 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:46:26.356014 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-07 04:46:26.356034 | orchestrator |
2026-04-07 04:46:26.356045 | orchestrator |
2026-04-07 04:46:26.356056 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:46:26.356067 | orchestrator | Tuesday 07 April 2026 04:46:25 +0000 (0:00:04.928) 0:01:23.726 *********
2026-04-07 04:46:26.356078 | orchestrator | ===============================================================================
2026-04-07 04:46:26.356089 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.45s
2026-04-07 04:46:26.356119 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.49s
2026-04-07 04:46:26.356131 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.56s
2026-04-07 04:46:26.356142 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.93s
2026-04-07 04:46:26.356153 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.90s
2026-04-07 04:46:26.356164 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.73s
2026-04-07 04:46:26.356174 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.67s
2026-04-07 04:46:26.356185 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.52s
2026-04-07 04:46:26.356196 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.21s
2026-04-07 04:46:26.356207 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.83s
2026-04-07 04:46:26.356218 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.81s
2026-04-07 04:46:26.356229 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.73s
2026-04-07 04:46:26.356240 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.69s
2026-04-07 04:46:26.356251 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.57s
2026-04-07 04:46:26.356289 | orchestrator | module-load : Load modules ---------------------------------------------- 2.54s
2026-04-07 04:46:26.356309 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.50s
2026-04-07 04:46:26.356328 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.43s
2026-04-07 04:46:26.356345 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.35s
2026-04-07 04:46:26.356363 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.26s
2026-04-07 04:46:26.356381 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.83s
2026-04-07 04:46:26.550656 | orchestrator | + osism apply -a upgrade ovn
2026-04-07 04:46:28.004197 | orchestrator | 2026-04-07 04:46:28 | INFO  | Prepare task for execution of ovn.
2026-04-07 04:46:28.070957 | orchestrator | 2026-04-07 04:46:28 | INFO  | Task 85d7ada5-0353-4808-93c0-4bc03bd6cf11 (ovn) was prepared for execution.
2026-04-07 04:46:28.071039 | orchestrator | 2026-04-07 04:46:28 | INFO  | It takes a moment until task 85d7ada5-0353-4808-93c0-4bc03bd6cf11 (ovn) has been started and output is visible here.
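The recap lines above end each Ansible play with one `host : ok=… changed=…` summary per node. As a minimal sketch of how such a summary line can be turned into counters for log post-processing (the `parse_recap_line` helper is hypothetical, not part of osism or Ansible):

```python
import re


def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP summary line into (host, counters).

    Expects the format seen in this log, e.g.
    'testbed-node-3 : ok=13 changed=4 unreachable=0 failed=0 ...'.
    """
    # The host name precedes the first colon; counters follow as key=value pairs.
    host, _, rest = line.partition(":")
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters


host, counts = parse_recap_line(
    "testbed-node-3 : ok=13 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0"
)
```

A job post-processor could flag the build as failed whenever `counts["failed"]` or `counts["unreachable"]` is non-zero for any host.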
2026-04-07 04:46:49.391462 | orchestrator |
2026-04-07 04:46:49.391564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-07 04:46:49.391577 | orchestrator |
2026-04-07 04:46:49.391586 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-07 04:46:49.391595 | orchestrator | Tuesday 07 April 2026 04:46:33 +0000 (0:00:01.998) 0:00:01.999 *********
2026-04-07 04:46:49.391603 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:46:49.391612 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:46:49.391620 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:46:49.391628 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:46:49.391635 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:46:49.391643 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:46:49.391675 | orchestrator |
2026-04-07 04:46:49.391683 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-07 04:46:49.391703 | orchestrator | Tuesday 07 April 2026 04:46:36 +0000 (0:00:02.781) 0:00:04.780 *********
2026-04-07 04:46:49.391712 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-07 04:46:49.391721 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-07 04:46:49.391729 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-07 04:46:49.391737 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-07 04:46:49.391745 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-07 04:46:49.391753 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-07 04:46:49.391760 | orchestrator |
2026-04-07 04:46:49.391768 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-07 04:46:49.391776 | orchestrator |
2026-04-07 04:46:49.391784 | orchestrator | TASK [ovn-controller : include_tasks]
****************************************** 2026-04-07 04:46:49.391792 | orchestrator | Tuesday 07 April 2026 04:46:39 +0000 (0:00:03.127) 0:00:07.907 ********* 2026-04-07 04:46:49.391800 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 04:46:49.391809 | orchestrator | 2026-04-07 04:46:49.391817 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-07 04:46:49.391825 | orchestrator | Tuesday 07 April 2026 04:46:43 +0000 (0:00:03.944) 0:00:11.851 ********* 2026-04-07 04:46:49.391834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391845 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391861 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391870 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391900 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391925 | orchestrator | 2026-04-07 04:46:49.391939 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-07 04:46:49.391953 | orchestrator | Tuesday 07 April 2026 04:46:46 +0000 (0:00:02.967) 0:00:14.819 ********* 2026-04-07 04:46:49.391974 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.391990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392020 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392034 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392049 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392064 | orchestrator | 2026-04-07 04:46:49.392078 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-07 04:46:49.392094 | orchestrator | Tuesday 07 April 2026 04:46:48 +0000 (0:00:02.547) 0:00:17.367 ********* 2026-04-07 04:46:49.392108 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:49.392162 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880749 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880836 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880846 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880854 | orchestrator | 2026-04-07 04:46:58.880863 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-04-07 04:46:58.880872 | orchestrator | Tuesday 07 April 2026 04:46:50 +0000 (0:00:02.006) 0:00:19.374 ********* 2026-04-07 04:46:58.880879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880894 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880920 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880928 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880948 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.880955 | orchestrator | 2026-04-07 04:46:58.880963 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-07 04:46:58.880970 | orchestrator | Tuesday 07 April 2026 04:46:54 +0000 (0:00:03.590) 0:00:22.964 ********* 2026-04-07 04:46:58.880979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:46:58.881061 | orchestrator | 2026-04-07 04:46:58.881068 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-07 04:46:58.881077 | orchestrator | Tuesday 07 April 2026 04:46:56 +0000 (0:00:02.542) 0:00:25.506 ********* 2026-04-07 04:46:58.881084 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:46:58.881092 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881100 | orchestrator | } 2026-04-07 04:46:58.881108 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:46:58.881115 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881122 | orchestrator | } 2026-04-07 04:46:58.881129 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:46:58.881136 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881143 | orchestrator | } 2026-04-07 04:46:58.881150 | orchestrator | changed: [testbed-node-3] => { 2026-04-07 04:46:58.881157 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881164 | orchestrator | } 2026-04-07 04:46:58.881171 | orchestrator | changed: [testbed-node-4] => { 2026-04-07 04:46:58.881179 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881186 | orchestrator | } 2026-04-07 04:46:58.881193 | orchestrator | changed: [testbed-node-5] => { 2026-04-07 04:46:58.881200 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:46:58.881207 | orchestrator | } 2026-04-07 04:46:58.881214 | orchestrator | 2026-04-07 04:46:58.881222 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-07 04:46:58.881229 | orchestrator | Tuesday 07 April 2026 04:46:58 +0000 
(0:00:01.854) 0:00:27.361 ********* 2026-04-07 04:46:58.881246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261294 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:47:22.261456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261510 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:47:22.261523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261534 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:47:22.261544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261572 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:47:22.261582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261591 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:47:22.261600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:47:22.261609 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:47:22.261618 | orchestrator | 2026-04-07 04:47:22.261628 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-07 04:47:22.261639 | orchestrator | Tuesday 07 April 2026 04:47:01 +0000 (0:00:02.596) 0:00:29.958 ********* 2026-04-07 04:47:22.261648 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:47:22.261657 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:47:22.261666 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:47:22.261675 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:47:22.261683 | orchestrator | ok: [testbed-node-4] 
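The "Configure OVN in OVSDB" task that follows writes per-chassis options (`ovn-encap-ip`, `ovn-encap-type`, `ovn-remote`, bridge and chassis-MAC mappings) into the local Open vSwitch database's `external_ids`, with `state: present` items set and `state: absent` items removed. A rough shell-equivalent sketch of what each item corresponds to (the `build_set_cmd` helper is illustrative only; Kolla performs this through its own Ansible modules, not by shelling out like this):

```python
def build_set_cmd(name: str, value, state: str = "present") -> str:
    """Render the ovs-vsctl call corresponding to one external_ids item.

    'present' items set the key, 'absent' items remove it; booleans are
    lowercased the way OVSDB stores them.
    """
    if state == "absent":
        # Removes one key from the external_ids map of the Open_vSwitch record.
        return f"ovs-vsctl remove Open_vSwitch . external_ids {name}"
    if isinstance(value, bool):
        value = str(value).lower()
    return f'ovs-vsctl set Open_vSwitch . external_ids:{name}="{value}"'


# Items mirroring testbed-node-0's configuration from the log below.
cmds = [
    build_set_cmd("ovn-encap-ip", "192.168.16.10"),
    build_set_cmd("ovn-encap-type", "geneve"),
    build_set_cmd("ovn-monitor-all", False),
    build_set_cmd("ovn-chassis-mac-mappings", "physnet1:52:54:00:52:c1:40", state="absent"),
]
```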
2026-04-07 04:47:22.261692 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:47:22.261700 | orchestrator |
2026-04-07 04:47:22.261709 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-07 04:47:22.261718 | orchestrator | Tuesday 07 April 2026 04:47:05 +0000 (0:00:03.645) 0:00:33.604 *********
2026-04-07 04:47:22.261727 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-07 04:47:22.261736 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-07 04:47:22.261745 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-07 04:47:22.261754 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-07 04:47:22.261763 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-07 04:47:22.261771 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-07 04:47:22.261780 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261788 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261797 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261805 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261814 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261854 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-07 04:47:22.261866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261906 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261917 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-07 04:47:22.261937 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.261948 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.261958 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.261968 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.261978 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.261989 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-07 04:47:22.262000 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262010 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262072 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262082 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262092 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262102 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-07 04:47:22.262112 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262122 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262132 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262143 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262153 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262164 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-07 04:47:22.262174 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 04:47:22.262185 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 04:47:22.262194 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 04:47:22.262203 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-07 04:47:22.262211 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 04:47:22.262220 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-07 04:47:22.262229 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-07 04:47:22.262239 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-07 04:47:22.262248 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-07 04:47:22.262263 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-07 04:47:22.262272 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-07 04:47:22.262292 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-07 04:50:20.405402 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-07 04:50:20.405519 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 04:50:20.405536 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 04:50:20.405548 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-07 04:50:20.405609 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova',
'state': 'present'}) 2026-04-07 04:50:20.405623 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-07 04:50:20.405634 | orchestrator | 2026-04-07 04:50:20.405647 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405659 | orchestrator | Tuesday 07 April 2026 04:47:25 +0000 (0:00:20.451) 0:00:54.055 ********* 2026-04-07 04:50:20.405670 | orchestrator | 2026-04-07 04:50:20.405681 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405692 | orchestrator | Tuesday 07 April 2026 04:47:25 +0000 (0:00:00.460) 0:00:54.516 ********* 2026-04-07 04:50:20.405703 | orchestrator | 2026-04-07 04:50:20.405714 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405726 | orchestrator | Tuesday 07 April 2026 04:47:26 +0000 (0:00:00.451) 0:00:54.967 ********* 2026-04-07 04:50:20.405737 | orchestrator | 2026-04-07 04:50:20.405747 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405759 | orchestrator | Tuesday 07 April 2026 04:47:26 +0000 (0:00:00.593) 0:00:55.561 ********* 2026-04-07 04:50:20.405771 | orchestrator | 2026-04-07 04:50:20.405782 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405793 | orchestrator | Tuesday 07 April 2026 04:47:27 +0000 (0:00:00.468) 0:00:56.030 ********* 2026-04-07 04:50:20.405804 | orchestrator | 2026-04-07 04:50:20.405815 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-07 04:50:20.405826 | orchestrator | Tuesday 07 April 2026 04:47:27 +0000 (0:00:00.457) 0:00:56.487 ********* 2026-04-07 04:50:20.405837 | orchestrator | 2026-04-07 04:50:20.405848 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-07 04:50:20.405859 | orchestrator | Tuesday 07 April 2026 04:47:28 +0000 (0:00:00.894) 0:00:57.381 ********* 2026-04-07 04:50:20.405869 | orchestrator | 2026-04-07 04:50:20.405881 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-04-07 04:50:20.405892 | orchestrator | changed: [testbed-node-4] 2026-04-07 04:50:20.405905 | orchestrator | changed: [testbed-node-3] 2026-04-07 04:50:20.405916 | orchestrator | changed: [testbed-node-5] 2026-04-07 04:50:20.405926 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:50:20.405937 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:50:20.405950 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:50:20.405964 | orchestrator | 2026-04-07 04:50:20.405976 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-07 04:50:20.405989 | orchestrator | 2026-04-07 04:50:20.406003 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-07 04:50:20.406095 | orchestrator | Tuesday 07 April 2026 04:49:46 +0000 (0:02:17.362) 0:03:14.744 ********* 2026-04-07 04:50:20.406113 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:50:20.406126 | orchestrator | 2026-04-07 04:50:20.406137 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-07 04:50:20.406149 | orchestrator | Tuesday 07 April 2026 04:49:48 +0000 (0:00:01.906) 0:03:16.651 ********* 2026-04-07 04:50:20.406160 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-07 04:50:20.406171 | orchestrator | 2026-04-07 04:50:20.406182 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-04-07 04:50:20.406193 | orchestrator | Tuesday 07 April 2026 04:49:50 +0000 (0:00:02.038) 0:03:18.689 ********* 2026-04-07 04:50:20.406205 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406217 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406228 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.406240 | orchestrator | 2026-04-07 04:50:20.406257 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-07 04:50:20.406276 | orchestrator | Tuesday 07 April 2026 04:49:52 +0000 (0:00:01.920) 0:03:20.610 ********* 2026-04-07 04:50:20.406299 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406327 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406346 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.406365 | orchestrator | 2026-04-07 04:50:20.406385 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-07 04:50:20.406403 | orchestrator | Tuesday 07 April 2026 04:49:53 +0000 (0:00:01.349) 0:03:21.959 ********* 2026-04-07 04:50:20.406422 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406440 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406458 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.406477 | orchestrator | 2026-04-07 04:50:20.406496 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-07 04:50:20.406516 | orchestrator | Tuesday 07 April 2026 04:49:54 +0000 (0:00:01.399) 0:03:23.359 ********* 2026-04-07 04:50:20.406538 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406556 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406611 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.406632 | orchestrator | 2026-04-07 04:50:20.406650 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-07 04:50:20.406689 | orchestrator | Tuesday 07 April 
2026 04:49:56 +0000 (0:00:01.348) 0:03:24.708 ********* 2026-04-07 04:50:20.406709 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406754 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406773 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.406790 | orchestrator | 2026-04-07 04:50:20.406809 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-07 04:50:20.406827 | orchestrator | Tuesday 07 April 2026 04:49:57 +0000 (0:00:01.332) 0:03:26.040 ********* 2026-04-07 04:50:20.406847 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:50:20.406865 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:50:20.406884 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:50:20.406902 | orchestrator | 2026-04-07 04:50:20.406921 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-07 04:50:20.406940 | orchestrator | Tuesday 07 April 2026 04:49:59 +0000 (0:00:01.564) 0:03:27.605 ********* 2026-04-07 04:50:20.406957 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.406975 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.406994 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.407013 | orchestrator | 2026-04-07 04:50:20.407030 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-07 04:50:20.407041 | orchestrator | Tuesday 07 April 2026 04:50:00 +0000 (0:00:01.812) 0:03:29.417 ********* 2026-04-07 04:50:20.407069 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:50:20.407080 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:50:20.407117 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:50:20.407128 | orchestrator | 2026-04-07 04:50:20.407139 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-07 04:50:20.407150 | orchestrator | Tuesday 07 April 2026 04:50:02 +0000 (0:00:01.426) 0:03:30.844 ********* 
2026-04-07 04:50:20.407161 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407171 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407182 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407193 | orchestrator |
2026-04-07 04:50:20.407204 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-07 04:50:20.407215 | orchestrator | Tuesday 07 April 2026 04:50:04 +0000 (0:00:01.843) 0:03:32.687 *********
2026-04-07 04:50:20.407226 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407236 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407247 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407258 | orchestrator |
2026-04-07 04:50:20.407269 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-07 04:50:20.407280 | orchestrator | Tuesday 07 April 2026 04:50:05 +0000 (0:00:01.767) 0:03:34.455 *********
2026-04-07 04:50:20.407291 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:50:20.407301 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:50:20.407312 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:50:20.407323 | orchestrator |
2026-04-07 04:50:20.407334 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-07 04:50:20.407345 | orchestrator | Tuesday 07 April 2026 04:50:07 +0000 (0:00:01.387) 0:03:35.842 *********
2026-04-07 04:50:20.407356 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:50:20.407367 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:50:20.407377 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:50:20.407388 | orchestrator |
2026-04-07 04:50:20.407399 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-07 04:50:20.407410 | orchestrator | Tuesday 07 April 2026 04:50:08 +0000 (0:00:01.426) 0:03:37.269 *********
2026-04-07 04:50:20.407421 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407432 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407443 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407453 | orchestrator |
2026-04-07 04:50:20.407464 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-07 04:50:20.407475 | orchestrator | Tuesday 07 April 2026 04:50:10 +0000 (0:00:01.959) 0:03:39.228 *********
2026-04-07 04:50:20.407486 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407497 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407507 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407518 | orchestrator |
2026-04-07 04:50:20.407529 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-07 04:50:20.407540 | orchestrator | Tuesday 07 April 2026 04:50:11 +0000 (0:00:01.315) 0:03:40.544 *********
2026-04-07 04:50:20.407551 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407596 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407608 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407619 | orchestrator |
2026-04-07 04:50:20.407630 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-07 04:50:20.407641 | orchestrator | Tuesday 07 April 2026 04:50:13 +0000 (0:00:01.855) 0:03:42.400 *********
2026-04-07 04:50:20.407652 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:50:20.407662 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:50:20.407673 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:50:20.407684 | orchestrator |
2026-04-07 04:50:20.407695 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-07 04:50:20.407706 | orchestrator | Tuesday 07 April 2026 04:50:15 +0000 (0:00:01.449) 0:03:43.850 *********
2026-04-07 04:50:20.407717 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:50:20.407728 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:50:20.407739 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:50:20.407750 | orchestrator |
2026-04-07 04:50:20.407761 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-07 04:50:20.407779 | orchestrator | Tuesday 07 April 2026 04:50:16 +0000 (0:00:01.618) 0:03:45.469 *********
2026-04-07 04:50:20.407790 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:50:20.407801 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:50:20.407811 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:50:20.407822 | orchestrator |
2026-04-07 04:50:20.407834 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-07 04:50:20.407844 | orchestrator | Tuesday 07 April 2026 04:50:18 +0000 (0:00:01.750) 0:03:47.219 *********
2026-04-07 04:50:20.407879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455746 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455778 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455821 | orchestrator |
2026-04-07 04:50:26.455826 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-07 04:50:26.455831 | orchestrator | Tuesday 07 April 2026 04:50:22 +0000 (0:00:03.819) 0:03:51.039 *********
2026-04-07 04:50:26.455835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455847 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455854 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:26.455861 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.643958 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644129 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644190 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644247 | orchestrator |
2026-04-07 04:50:41.644265 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-04-07 04:50:41.644285 | orchestrator | Tuesday 07 April 2026 04:50:28 +0000 (0:00:06.036) 0:03:57.076 *********
2026-04-07 04:50:41.644304 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-04-07 04:50:41.644323 | orchestrator |
2026-04-07 04:50:41.644339 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-04-07 04:50:41.644349 | orchestrator | Tuesday 07 April 2026 04:50:30 +0000 (0:00:02.027) 0:03:59.104 *********
2026-04-07 04:50:41.644359 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:50:41.644371 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:50:41.644398 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:50:41.644408 | orchestrator |
2026-04-07 04:50:41.644432 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-04-07 04:50:41.644442 | orchestrator | Tuesday 07 April 2026 04:50:32 +0000 (0:00:01.678) 0:04:00.782 *********
2026-04-07 04:50:41.644452 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:50:41.644462 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:50:41.644471 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:50:41.644481 | orchestrator |
2026-04-07 04:50:41.644491 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-04-07 04:50:41.644500 | orchestrator | Tuesday 07 April 2026 04:50:35 +0000 (0:00:02.884) 0:04:03.666 *********
2026-04-07 04:50:41.644510 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:50:41.644519 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:50:41.644529 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:50:41.644538 | orchestrator |
2026-04-07 04:50:41.644548 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-04-07 04:50:41.644558 | orchestrator | Tuesday 07 April 2026 04:50:37 +0000 (0:00:02.652) 0:04:06.318 *********
2026-04-07 04:50:41.644569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:41.644701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.805878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806157 | orchestrator |
2026-04-07 04:50:46.806171 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-07 04:50:46.806183 | orchestrator | Tuesday 07 April 2026 04:50:43 +0000 (0:00:05.376) 0:04:11.695 *********
2026-04-07 04:50:46.806199 | orchestrator | changed: [testbed-node-0] => {
2026-04-07 04:50:46.806218 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:50:46.806235 | orchestrator | }
2026-04-07 04:50:46.806251 | orchestrator | changed: [testbed-node-1] => {
2026-04-07 04:50:46.806266 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:50:46.806281 | orchestrator | }
2026-04-07 04:50:46.806315 | orchestrator | changed: [testbed-node-2] => {
2026-04-07 04:50:46.806334 | orchestrator |  "msg": "Notifying handlers"
2026-04-07 04:50:46.806345 | orchestrator | }
2026-04-07 04:50:46.806356 | orchestrator |
2026-04-07 04:50:46.806367 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-07 04:50:46.806378 | orchestrator | Tuesday 07 April 2026 04:50:44 +0000 (0:00:01.388) 0:04:13.084 *********
2026-04-07 04:50:46.806391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-07 04:50:46.806436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image':
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-07 04:50:46.806493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-07 04:50:46.806567 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-07 04:52:40.295055 | orchestrator | 2026-04-07 04:52:40.295170 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-07 04:52:40.295187 | orchestrator | Tuesday 07 April 2026 04:50:47 +0000 (0:00:03.479) 0:04:16.564 ********* 2026-04-07 04:52:40.295199 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-07 04:52:40.295211 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-07 04:52:40.295221 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-07 04:52:40.295230 | orchestrator | 2026-04-07 04:52:40.295241 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-07 04:52:40.295251 | orchestrator | Tuesday 07 April 2026 04:51:10 +0000 (0:00:22.167) 0:04:38.732 ********* 2026-04-07 04:52:40.295261 | orchestrator | changed: [testbed-node-0] => { 2026-04-07 04:52:40.295271 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:52:40.295281 | orchestrator | } 2026-04-07 
04:52:40.295291 | orchestrator | changed: [testbed-node-1] => { 2026-04-07 04:52:40.295301 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:52:40.295311 | orchestrator | } 2026-04-07 04:52:40.295321 | orchestrator | changed: [testbed-node-2] => { 2026-04-07 04:52:40.295330 | orchestrator |  "msg": "Notifying handlers" 2026-04-07 04:52:40.295340 | orchestrator | } 2026-04-07 04:52:40.295350 | orchestrator | 2026-04-07 04:52:40.295360 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 04:52:40.295369 | orchestrator | Tuesday 07 April 2026 04:51:11 +0000 (0:00:01.412) 0:04:40.144 ********* 2026-04-07 04:52:40.295379 | orchestrator | 2026-04-07 04:52:40.295389 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 04:52:40.295398 | orchestrator | Tuesday 07 April 2026 04:51:12 +0000 (0:00:00.456) 0:04:40.601 ********* 2026-04-07 04:52:40.295409 | orchestrator | 2026-04-07 04:52:40.295419 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-07 04:52:40.295429 | orchestrator | Tuesday 07 April 2026 04:51:12 +0000 (0:00:00.430) 0:04:41.032 ********* 2026-04-07 04:52:40.295438 | orchestrator | 2026-04-07 04:52:40.295448 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-07 04:52:40.295458 | orchestrator | Tuesday 07 April 2026 04:51:13 +0000 (0:00:00.794) 0:04:41.826 ********* 2026-04-07 04:52:40.295467 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:52:40.295477 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:52:40.295486 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:52:40.295496 | orchestrator | 2026-04-07 04:52:40.295506 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-07 04:52:40.295515 | orchestrator | Tuesday 07 April 2026 04:51:29 +0000 (0:00:16.501) 
0:04:58.327 ********* 2026-04-07 04:52:40.295525 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:52:40.295535 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:52:40.295544 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:52:40.295576 | orchestrator | 2026-04-07 04:52:40.295587 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-04-07 04:52:40.295598 | orchestrator | Tuesday 07 April 2026 04:51:46 +0000 (0:00:16.749) 0:05:15.077 ********* 2026-04-07 04:52:40.295610 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-07 04:52:40.295622 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-07 04:52:40.295634 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-07 04:52:40.295645 | orchestrator | 2026-04-07 04:52:40.295656 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-07 04:52:40.295668 | orchestrator | Tuesday 07 April 2026 04:52:02 +0000 (0:00:16.021) 0:05:31.098 ********* 2026-04-07 04:52:40.295679 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:52:40.295690 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:52:40.295701 | orchestrator | changed: [testbed-node-2] 2026-04-07 04:52:40.295712 | orchestrator | 2026-04-07 04:52:40.295736 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-07 04:52:40.295769 | orchestrator | Tuesday 07 April 2026 04:52:19 +0000 (0:00:16.632) 0:05:47.731 ********* 2026-04-07 04:52:40.295782 | orchestrator | Pausing for 5 seconds 2026-04-07 04:52:40.295794 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:52:40.295805 | orchestrator | 2026-04-07 04:52:40.295817 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-07 04:52:40.295828 | orchestrator | Tuesday 07 April 2026 04:52:25 +0000 (0:00:06.177) 0:05:53.908 ********* 2026-04-07 04:52:40.295840 | 
orchestrator | ok: [testbed-node-0] 2026-04-07 04:52:40.295851 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:52:40.295863 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:52:40.295874 | orchestrator | 2026-04-07 04:52:40.295885 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-07 04:52:40.295896 | orchestrator | Tuesday 07 April 2026 04:52:27 +0000 (0:00:01.930) 0:05:55.838 ********* 2026-04-07 04:52:40.295907 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:52:40.295918 | orchestrator | changed: [testbed-node-0] 2026-04-07 04:52:40.295929 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:52:40.295940 | orchestrator | 2026-04-07 04:52:40.295952 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-07 04:52:40.295962 | orchestrator | Tuesday 07 April 2026 04:52:29 +0000 (0:00:01.859) 0:05:57.698 ********* 2026-04-07 04:52:40.295972 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:52:40.295982 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:52:40.295991 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:52:40.296001 | orchestrator | 2026-04-07 04:52:40.296011 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-07 04:52:40.296021 | orchestrator | Tuesday 07 April 2026 04:52:31 +0000 (0:00:01.878) 0:05:59.576 ********* 2026-04-07 04:52:40.296030 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:52:40.296040 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:52:40.296050 | orchestrator | changed: [testbed-node-1] 2026-04-07 04:52:40.296059 | orchestrator | 2026-04-07 04:52:40.296069 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-07 04:52:40.296079 | orchestrator | Tuesday 07 April 2026 04:52:32 +0000 (0:00:01.788) 0:06:01.364 ********* 2026-04-07 04:52:40.296088 | orchestrator | ok: [testbed-node-0] 
2026-04-07 04:52:40.296098 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:52:40.296107 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:52:40.296117 | orchestrator |
2026-04-07 04:52:40.296126 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-07 04:52:40.296152 | orchestrator | Tuesday 07 April 2026 04:52:34 +0000 (0:00:02.183) 0:06:03.548 *********
2026-04-07 04:52:40.296162 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:52:40.296172 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:52:40.296181 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:52:40.296191 | orchestrator |
2026-04-07 04:52:40.296201 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-07 04:52:40.296211 | orchestrator | Tuesday 07 April 2026 04:52:37 +0000 (0:00:02.245) 0:06:05.793 *********
2026-04-07 04:52:40.296232 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-07 04:52:40.296242 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-07 04:52:40.296252 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-07 04:52:40.296261 | orchestrator |
2026-04-07 04:52:40.296271 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 04:52:40.296282 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 04:52:40.296293 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-07 04:52:40.296303 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-07 04:52:40.296313 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:52:40.296322 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:52:40.296332 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-07 04:52:40.296341 | orchestrator |
2026-04-07 04:52:40.296351 | orchestrator |
2026-04-07 04:52:40.296361 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 04:52:40.296371 | orchestrator | Tuesday 07 April 2026 04:52:39 +0000 (0:00:02.629) 0:06:08.423 *********
2026-04-07 04:52:40.296380 | orchestrator | ===============================================================================
2026-04-07 04:52:40.296390 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 137.36s
2026-04-07 04:52:40.296400 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 22.17s
2026-04-07 04:52:40.296409 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.45s
2026-04-07 04:52:40.296419 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.75s
2026-04-07 04:52:40.296428 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.63s
2026-04-07 04:52:40.296438 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.50s
2026-04-07 04:52:40.296447 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.02s
2026-04-07 04:52:40.296457 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.18s
2026-04-07 04:52:40.296466 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.04s
2026-04-07 04:52:40.296481 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.38s
2026-04-07 04:52:40.296491 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.94s
2026-04-07 04:52:40.296501 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.82s
2026-04-07 04:52:40.296511 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.65s
2026-04-07 04:52:40.296521 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.59s
2026-04-07 04:52:40.296530 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.48s
2026-04-07 04:52:40.296540 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.33s
2026-04-07 04:52:40.296549 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.13s
2026-04-07 04:52:40.296564 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.97s
2026-04-07 04:52:40.296580 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.88s
2026-04-07 04:52:40.296595 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.78s
2026-04-07 04:52:40.502887 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-07 04:52:40.502982 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-07 04:52:40.502997 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-04-07 04:52:40.510675 | orchestrator | + set -e
2026-04-07 04:52:40.510735 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-07 04:52:40.510785 | orchestrator | ++ export INTERACTIVE=false
2026-04-07 04:52:40.510805 | orchestrator | ++ INTERACTIVE=false
2026-04-07 04:52:40.510824 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-07 04:52:40.510842 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-07 04:52:40.510860 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-04-07 04:52:41.855878 | orchestrator | 2026-04-07 04:52:41 | INFO  | Prepare task for execution of ceph-rolling_update.
2026-04-07 04:52:41.937582 | orchestrator | 2026-04-07 04:52:41 | INFO  | Task 708a9a9a-555e-4514-8c93-c935201467fc (ceph-rolling_update) was prepared for execution.
2026-04-07 04:52:41.937679 | orchestrator | 2026-04-07 04:52:41 | INFO  | It takes a moment until task 708a9a9a-555e-4514-8c93-c935201467fc (ceph-rolling_update) has been started and output is visible here.
2026-04-07 04:54:05.353452 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-07 04:54:05.353567 | orchestrator | 2.16.14
2026-04-07 04:54:05.353584 | orchestrator |
2026-04-07 04:54:05.353597 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-04-07 04:54:05.353610 | orchestrator |
2026-04-07 04:54:05.353621 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-04-07 04:54:05.353633 | orchestrator | Tuesday 07 April 2026 04:52:49 +0000 (0:00:01.666) 0:00:01.666 *********
2026-04-07 04:54:05.353644 | orchestrator | skipping: [localhost]
2026-04-07 04:54:05.353656 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-04-07 04:54:05.353668 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-04-07 04:54:05.353679 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-04-07 04:54:05.353690 | orchestrator |
2026-04-07 04:54:05.353701 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-04-07 04:54:05.353712 | orchestrator |
2026-04-07 04:54:05.353723 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-04-07 04:54:05.353734 | orchestrator | Tuesday 07 April 2026 04:52:52 +0000 (0:00:03.269) 0:00:04.936 *********
2026-04-07 04:54:05.353746 | orchestrator | ok: [testbed-node-0] => {
2026-04-07 04:54:05.353757 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353769 | orchestrator | }
2026-04-07 04:54:05.353780 | orchestrator | ok: [testbed-node-1] => {
2026-04-07 04:54:05.353791 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353802 | orchestrator | }
2026-04-07 04:54:05.353813 | orchestrator | ok: [testbed-node-2] => {
2026-04-07 04:54:05.353824 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353835 | orchestrator | }
2026-04-07 04:54:05.353890 | orchestrator | ok: [testbed-node-3] => {
2026-04-07 04:54:05.353903 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353914 | orchestrator | }
2026-04-07 04:54:05.353925 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 04:54:05.353936 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353947 | orchestrator | }
2026-04-07 04:54:05.353958 | orchestrator | ok: [testbed-node-5] => {
2026-04-07 04:54:05.353969 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.353981 | orchestrator | }
2026-04-07 04:54:05.353995 | orchestrator | ok: [testbed-manager] => {
2026-04-07 04:54:05.354009 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 04:54:05.354107 | orchestrator | }
2026-04-07 04:54:05.354148 | orchestrator |
2026-04-07 04:54:05.354161 | orchestrator | TASK [Gather facts] ************************************************************
2026-04-07 04:54:05.354174 | orchestrator | Tuesday 07 April 2026 04:52:57 +0000 (0:00:04.620) 0:00:09.556 *********
2026-04-07 04:54:05.354186 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:05.354199 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:05.354212 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:05.354224 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:05.354237 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:05.354249 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:05.354262 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:05.354275 | orchestrator |
2026-04-07 04:54:05.354288 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-04-07 04:54:05.354301 | orchestrator | Tuesday 07 April 2026 04:53:04 +0000 (0:00:06.561) 0:00:16.117 *********
2026-04-07 04:54:05.354327 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 04:54:05.354341 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 04:54:05.354352 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 04:54:05.354363 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:54:05.354374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:54:05.354385 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:54:05.354396 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 04:54:05.354407 | orchestrator |
2026-04-07 04:54:05.354418 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-04-07 04:54:05.354429 | orchestrator | Tuesday 07 April 2026 04:53:35 +0000 (0:00:31.058) 0:00:47.176 *********
2026-04-07 04:54:05.354440 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:05.354450 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:05.354461 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:05.354472 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:05.354483 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:05.354493 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:05.354504 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:05.354515 | orchestrator |
2026-04-07 04:54:05.354527 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-07 04:54:05.354538 | orchestrator | Tuesday 07 April 2026 04:53:37 +0000 (0:00:02.221) 0:00:49.397 *********
2026-04-07 04:54:05.354549 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-07 04:54:05.354562 | orchestrator |
2026-04-07 04:54:05.354573 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-07 04:54:05.354584 | orchestrator | Tuesday 07 April 2026 04:53:40 +0000 (0:00:02.795) 0:00:52.193 *********
2026-04-07 04:54:05.354595 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:05.354605 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:05.354616 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:05.354627 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:05.354637 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:05.354648 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:05.354659 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:05.354670 | orchestrator |
2026-04-07 04:54:05.354700 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-07 04:54:05.354712 | orchestrator | Tuesday 07 April 2026 04:53:42 +0000 (0:00:02.646) 0:00:54.839 *********
2026-04-07 04:54:05.354723 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:05.354734 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:05.354744 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:05.354755 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:05.354766 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:05.354787 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:05.354798 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:05.354809 | orchestrator | 2026-04-07 04:54:05.354820 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-07 04:54:05.354831 | orchestrator | Tuesday 07 April 2026 04:53:45 +0000 (0:00:02.155) 0:00:56.995 ********* 2026-04-07 04:54:05.354842 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:05.354890 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:05.354902 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:05.354912 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:05.354923 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:05.354934 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:05.354945 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:05.354956 | orchestrator | 2026-04-07 04:54:05.354967 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-07 04:54:05.354979 | orchestrator | Tuesday 07 April 2026 04:53:47 +0000 (0:00:02.669) 0:00:59.665 ********* 2026-04-07 04:54:05.354989 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:05.355000 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:05.355011 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:05.355022 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:05.355033 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:05.355043 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:05.355054 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:05.355065 | orchestrator | 2026-04-07 04:54:05.355076 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-07 04:54:05.355087 | orchestrator | Tuesday 07 April 2026 04:53:49 +0000 (0:00:01.863) 0:01:01.529 ********* 2026-04-07 04:54:05.355098 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:05.355109 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:05.355119 | 
orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:05.355130 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:05.355141 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:05.355152 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:05.355163 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:05.355174 | orchestrator | 2026-04-07 04:54:05.355185 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-07 04:54:05.355195 | orchestrator | Tuesday 07 April 2026 04:53:51 +0000 (0:00:02.192) 0:01:03.721 ********* 2026-04-07 04:54:05.355206 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:05.355217 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:05.355228 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:05.355239 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:05.355250 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:05.355260 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:05.355271 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:05.355282 | orchestrator | 2026-04-07 04:54:05.355293 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-07 04:54:05.355304 | orchestrator | Tuesday 07 April 2026 04:53:53 +0000 (0:00:01.930) 0:01:05.652 ********* 2026-04-07 04:54:05.355315 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:54:05.355326 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:54:05.355337 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:54:05.355348 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:54:05.355359 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:54:05.355369 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:54:05.355380 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:54:05.355391 | orchestrator | 2026-04-07 04:54:05.355408 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 
2026-04-07 04:54:05.355420 | orchestrator | Tuesday 07 April 2026 04:53:55 +0000 (0:00:02.228) 0:01:07.880 *********
2026-04-07 04:54:05.355431 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:05.355442 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:05.355453 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:05.355463 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:05.355474 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:05.355493 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:05.355504 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:05.355515 | orchestrator | 
2026-04-07 04:54:05.355526 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-07 04:54:05.355537 | orchestrator | Tuesday 07 April 2026 04:53:57 +0000 (0:00:01.853) 0:01:09.733 *********
2026-04-07 04:54:05.355548 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:54:05.355559 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:54:05.355570 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:54:05.355581 | orchestrator | 
2026-04-07 04:54:05.355592 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-07 04:54:05.355603 | orchestrator | Tuesday 07 April 2026 04:53:59 +0000 (0:00:01.901) 0:01:11.634 *********
2026-04-07 04:54:05.355614 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:05.355625 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:05.355636 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:05.355647 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:05.355657 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:05.355668 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:05.355679 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:05.355690 | orchestrator | 
2026-04-07 04:54:05.355701 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-07 04:54:05.355712 | orchestrator | Tuesday 07 April 2026 04:54:01 +0000 (0:00:02.148) 0:01:13.783 *********
2026-04-07 04:54:05.355723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:54:05.355734 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:54:05.355745 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:54:05.355756 | orchestrator | 
2026-04-07 04:54:05.355767 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-07 04:54:05.355778 | orchestrator | Tuesday 07 April 2026 04:54:05 +0000 (0:00:03.369) 0:01:17.152 *********
2026-04-07 04:54:05.355796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:54:27.840316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 04:54:27.840456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 04:54:27.840482 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.840501 | orchestrator | 
2026-04-07 04:54:27.840521 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-07 04:54:27.840539 | orchestrator | Tuesday 07 April 2026 04:54:06 +0000 (0:00:01.472) 0:01:18.625 *********
2026-04-07 04:54:27.840560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840585 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840623 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.840642 | orchestrator | 
2026-04-07 04:54:27.840662 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-07 04:54:27.840681 | orchestrator | Tuesday 07 April 2026 04:54:08 +0000 (0:00:01.884) 0:01:20.509 *********
2026-04-07 04:54:27.840703 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840752 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840781 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840793 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.840804 | orchestrator | 
2026-04-07 04:54:27.840815 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-07 04:54:27.840826 | orchestrator | Tuesday 07 April 2026 04:54:09 +0000 (0:00:01.152) 0:01:21.662 *********
2026-04-07 04:54:27.840839 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4cd0634997ff', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 04:54:02.516644', 'end': '2026-04-07 04:54:02.583438', 'delta': '0:00:00.066794', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4cd0634997ff'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840914 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8d9f46c7c23', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 04:54:03.388807', 'end': '2026-04-07 04:54:03.429749', 'delta': '0:00:00.040942', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8d9f46c7c23'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840931 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f4f6ca89ad43', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 04:54:03.920816', 'end': '2026-04-07 04:54:03.977110', 'delta': '0:00:00.056294', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4f6ca89ad43'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 04:54:27.840945 | orchestrator | 
2026-04-07 04:54:27.840958 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-07 04:54:27.840971 | orchestrator | Tuesday 07 April 2026 04:54:10 +0000 (0:00:01.198) 0:01:22.861 *********
2026-04-07 04:54:27.840993 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:27.841007 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:27.841020 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:27.841032 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:27.841046 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:27.841058 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:27.841073 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:27.841091 | orchestrator | 
2026-04-07 04:54:27.841109 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-07 04:54:27.841128 | orchestrator | Tuesday 07 April 2026 04:54:13 +0000 (0:00:02.135) 0:01:24.997 *********
2026-04-07 04:54:27.841145 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.841163 | orchestrator | 
2026-04-07 04:54:27.841180 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-07 04:54:27.841198 | orchestrator | Tuesday 07 April 2026 04:54:14 +0000 (0:00:01.271) 0:01:26.268 *********
2026-04-07 04:54:27.841216 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:27.841234 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:27.841251 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:27.841268 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:27.841286 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:27.841305 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:27.841323 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:27.841340 | orchestrator | 
2026-04-07 04:54:27.841359 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-07 04:54:27.841377 | orchestrator | Tuesday 07 April 2026 04:54:16 +0000 (0:00:02.210) 0:01:28.479 *********
2026-04-07 04:54:27.841396 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:27.841416 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841434 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841454 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841472 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841518 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-07 04:54:27.841538 | orchestrator | 
2026-04-07 04:54:27.841557 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 04:54:27.841572 | orchestrator | Tuesday 07 April 2026 04:54:19 +0000 (0:00:03.416) 0:01:31.896 *********
2026-04-07 04:54:27.841590 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:54:27.841608 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:54:27.841627 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:54:27.841646 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:54:27.841664 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:54:27.841684 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:54:27.841703 | orchestrator | ok: [testbed-manager]
2026-04-07 04:54:27.841722 | orchestrator | 
2026-04-07 04:54:27.841740 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-07 04:54:27.841759 | orchestrator | Tuesday 07 April 2026 04:54:22 +0000 (0:00:02.252) 0:01:34.149 *********
2026-04-07 04:54:27.841777 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.841796 | orchestrator | 
2026-04-07 04:54:27.841807 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-07 04:54:27.841818 | orchestrator | Tuesday 07 April 2026 04:54:23 +0000 (0:00:01.145) 0:01:35.294 *********
2026-04-07 04:54:27.841830 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.841841 | orchestrator | 
2026-04-07 04:54:27.841852 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 04:54:27.841863 | orchestrator | Tuesday 07 April 2026 04:54:24 +0000 (0:00:01.230) 0:01:36.525 *********
2026-04-07 04:54:27.841894 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.841905 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:27.841916 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:27.841940 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:27.841951 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:27.841962 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:27.841973 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:27.841984 | orchestrator | 
2026-04-07 04:54:27.841995 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-07 04:54:27.842005 | orchestrator | Tuesday 07 April 2026 04:54:27 +0000 (0:00:02.485) 0:01:39.010 *********
2026-04-07 04:54:27.842082 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:27.842097 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:27.842108 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:27.842119 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:27.842129 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:27.842140 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:27.842164 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623065 | orchestrator | 
2026-04-07 04:54:39.623178 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-07 04:54:39.623196 | orchestrator | Tuesday 07 April 2026 04:54:28 +0000 (0:00:01.960) 0:01:40.970 *********
2026-04-07 04:54:39.623209 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:39.623221 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:39.623233 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:39.623244 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:39.623255 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:39.623266 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:39.623277 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623287 | orchestrator | 
2026-04-07 04:54:39.623299 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-07 04:54:39.623310 | orchestrator | Tuesday 07 April 2026 04:54:31 +0000 (0:00:02.205) 0:01:43.176 *********
2026-04-07 04:54:39.623321 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:39.623332 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:39.623343 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:39.623354 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:39.623364 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:39.623375 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:39.623386 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623397 | orchestrator | 
2026-04-07 04:54:39.623408 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-07 04:54:39.623419 | orchestrator | Tuesday 07 April 2026 04:54:33 +0000 (0:00:01.953) 0:01:45.129 *********
2026-04-07 04:54:39.623430 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:39.623440 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:39.623451 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:39.623462 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:39.623473 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:39.623484 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:39.623495 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623506 | orchestrator | 
2026-04-07 04:54:39.623520 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-07 04:54:39.623533 | orchestrator | Tuesday 07 April 2026 04:54:35 +0000 (0:00:02.108) 0:01:47.237 *********
2026-04-07 04:54:39.623545 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:39.623558 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:39.623571 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:39.623583 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:39.623596 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:39.623609 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:39.623622 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623634 | orchestrator | 
2026-04-07 04:54:39.623647 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-07 04:54:39.623661 | orchestrator | Tuesday 07 April 2026 04:54:37 +0000 (0:00:01.953) 0:01:49.191 *********
2026-04-07 04:54:39.623700 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:54:39.623713 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:54:39.623727 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:54:39.623739 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:54:39.623752 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:54:39.623764 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:54:39.623777 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:54:39.623789 | orchestrator | 
2026-04-07 04:54:39.623802 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-07 04:54:39.623815 | orchestrator | Tuesday 07 April 2026 04:54:39 +0000 (0:00:02.248) 0:01:51.440 *********
2026-04-07 04:54:39.623844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 04:54:39.623861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 04:54:39.623875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': 
None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.623928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:39.623944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.623957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.623968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.623999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 
'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 04:54:39.624014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.624035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932648 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:54:39.932747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:39.932846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.932971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36ff44a1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 04:54:39.932999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.933011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.933022 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:54:39.933040 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.933052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.933063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:39.933074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:39.933095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb3b1ac7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.156228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156249 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:54:40.156281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'uuids': ['f2bf8803-d65d-44f0-ad5c-6b3f26298c9c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ']}})  2026-04-07 04:54:40.156313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0766011', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.156330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a']}})  2026-04-07 04:54:40.156342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.156364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:40.156382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i', 'dm-uuid-CRYPT-LUKS2-4ff33acd7a6c412b9d804fdff86f67b2-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'uuids': ['4ff33acd-7a6c-412b-9d80-4fdff86f67b2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i']}})  2026-04-07 04:54:40.311583 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a']}})  2026-04-07 04:54:40.311594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca08a9c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.311669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ', 'dm-uuid-CRYPT-LUKS2-f2bf8803d65d44f0ad5c6b3f26298c9c-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.311708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-07 04:54:40.311718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'uuids': ['7025c1bb-400d-47b2-a45c-5776ba2915d5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL']}})  2026-04-07 04:54:40.311744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4ea74e91', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.466404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f']}})  2026-04-07 04:54:40.466503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466520 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:54:40.466552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:40.466578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8', 'dm-uuid-CRYPT-LUKS2-ba89526f6e2c46628e82906f3c013265-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-07 04:54:40.466652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'uuids': ['ba89526f-6e2c-4662-8e82-906f3c013265'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8']}})  2026-04-07 04:54:40.466666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09']}})  2026-04-07 04:54:40.466707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.466750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdec1fc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.604639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'uuids': ['699c8f07-94ac-4c9a-a0de-024156723f9a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX']}})  2026-04-07 04:54:40.604746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.604772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.604814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599', 'scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b27a0136', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.604832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL', 'dm-uuid-CRYPT-LUKS2-7025c1bb400d47b2a45c5776ba2915d5-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.604846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UDOgFQ-qFyi-gVi2-LQBC-OZQf-u9TS-0kON4x', 'scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7', 'scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2']}})  2026-04-07 04:54:40.604879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.604964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.604978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:40.604990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.605008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8', 'dm-uuid-CRYPT-LUKS2-4e91966f3ea449a98c6c9031afa42b57-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.605020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.605032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'uuids': ['4e91966f-3ea4-49a9-8c6c-9031afa42b57'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8']}})  2026-04-07 04:54:40.605052 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d']}})  2026-04-07 04:54:40.605071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2524aa84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:40.774756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774794 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:54:40.774808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX', 'dm-uuid-CRYPT-LUKS2-699c8f0794ac4c9aa0de024156723f9a-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774835 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:54:40.774846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774877 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774955 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-24-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:54:40.774968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.774979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.775031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:40.775057 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f80bc7fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:54:42.360764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:42.360886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:54:42.360985 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:54:42.361001 | orchestrator | 2026-04-07 04:54:42.361013 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 04:54:42.361025 | orchestrator | Tuesday 07 April 2026 04:54:41 +0000 (0:00:02.439) 0:01:53.880 ********* 2026-04-07 04:54:42.361040 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361092 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361127 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361161 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361180 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361192 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.361254 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481463 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481581 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481597 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481609 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481620 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481631 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481660 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481700 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36ff44a1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15', 
'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481713 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.481736 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794773 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:54:42.794877 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794942 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794957 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794970 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794983 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.794994 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.795067 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.795085 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb3b1ac7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.795099 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.795111 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.795140 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:54:42.795162 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'uuids': ['f2bf8803-d65d-44f0-ad5c-6b3f26298c9c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0766011', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978845 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:54:42.978859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i', 'dm-uuid-CRYPT-LUKS2-4ff33acd7a6c412b9d804fdff86f67b2-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'uuids': ['4ff33acd-7a6c-412b-9d80-4fdff86f67b2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:42.978981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': 'aca08a9c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'uuids': ['7025c1bb-400d-47b2-a45c-5776ba2915d5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4ea74e91', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.094995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.095021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': 
'512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.095035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.095056 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ', 'dm-uuid-CRYPT-LUKS2-f2bf8803d65d44f0ad5c6b3f26298c9c-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8', 'dm-uuid-CRYPT-LUKS2-ba89526f6e2c46628e82906f3c013265-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182335 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'uuids': ['ba89526f-6e2c-4662-8e82-906f3c013265'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdec1fc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.182441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329005 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL', 'dm-uuid-CRYPT-LUKS2-7025c1bb400d47b2a45c5776ba2915d5-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'uuids': ['699c8f07-94ac-4c9a-a0de-024156723f9a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329144 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599', 'scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b27a0136', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UDOgFQ-qFyi-gVi2-LQBC-OZQf-u9TS-0kON4x', 'scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7', 'scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329195 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:54:43.329214 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329223 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:54:43.329231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 
'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8', 'dm-uuid-CRYPT-LUKS2-4e91966f3ea449a98c6c9031afa42b57-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.329265 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389302 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389340 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389352 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'uuids': ['4e91966f-3ea4-49a9-8c6c-9031afa42b57'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389363 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-24-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d']}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389419 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389441 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:43.389457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2524aa84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 
'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106505 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106662 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f80bc7fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106777 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX', 'dm-uuid-CRYPT-LUKS2-699c8f0794ac4c9aa0de024156723f9a-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106809 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:54:55.106822 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:54:55.106853 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:54:55.106876 | orchestrator | 2026-04-07 04:54:55.106889 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-07 04:54:55.106902 | orchestrator | Tuesday 07 April 2026 04:54:44 +0000 (0:00:02.815) 0:01:56.696 ********* 2026-04-07 04:54:55.106966 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:55.106979 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:55.106989 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:55.107000 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:55.107011 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:55.107022 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:55.107032 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:55.107043 | orchestrator | 2026-04-07 04:54:55.107054 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 
2026-04-07 04:54:55.107065 | orchestrator | Tuesday 07 April 2026 04:54:47 +0000 (0:00:02.763) 0:01:59.459 ********* 2026-04-07 04:54:55.107076 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:55.107087 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:55.107098 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:55.107108 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:55.107119 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:55.107129 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:55.107140 | orchestrator | ok: [testbed-manager] 2026-04-07 04:54:55.107151 | orchestrator | 2026-04-07 04:54:55.107162 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 04:54:55.107173 | orchestrator | Tuesday 07 April 2026 04:54:49 +0000 (0:00:02.019) 0:02:01.479 ********* 2026-04-07 04:54:55.107184 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:54:55.107195 | orchestrator | ok: [testbed-node-1] 2026-04-07 04:54:55.107205 | orchestrator | ok: [testbed-node-2] 2026-04-07 04:54:55.107216 | orchestrator | ok: [testbed-node-3] 2026-04-07 04:54:55.107227 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:54:55.107238 | orchestrator | ok: [testbed-node-4] 2026-04-07 04:54:55.107249 | orchestrator | ok: [testbed-node-5] 2026-04-07 04:54:55.107260 | orchestrator | 2026-04-07 04:54:55.107271 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 04:54:55.107282 | orchestrator | Tuesday 07 April 2026 04:54:51 +0000 (0:00:02.470) 0:02:03.949 ********* 2026-04-07 04:54:55.107293 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:54:55.107303 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:54:55.107314 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:54:55.107325 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:54:55.107336 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:54:55.107347 | 
orchestrator | skipping: [testbed-node-5] 2026-04-07 04:54:55.107357 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:54:55.107368 | orchestrator | 2026-04-07 04:54:55.107379 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 04:54:55.107390 | orchestrator | Tuesday 07 April 2026 04:54:54 +0000 (0:00:02.037) 0:02:05.987 ********* 2026-04-07 04:54:55.107401 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:54:55.107412 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:54:55.107422 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:54:55.107433 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:54:55.107452 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:55:22.218342 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:55:22.218436 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-04-07 04:55:22.218447 | orchestrator | 2026-04-07 04:55:22.218456 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 04:55:22.218464 | orchestrator | Tuesday 07 April 2026 04:54:56 +0000 (0:00:02.705) 0:02:08.693 ********* 2026-04-07 04:55:22.218471 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:55:22.218478 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:55:22.218485 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:55:22.218492 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:55:22.218499 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:55:22.218505 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:55:22.218512 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:55:22.218518 | orchestrator | 2026-04-07 04:55:22.218526 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 04:55:22.218553 | orchestrator | Tuesday 07 April 2026 04:54:58 +0000 (0:00:01.873) 0:02:10.566 ********* 
2026-04-07 04:55:22.218574 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:55:22.218582 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 04:55:22.218588 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 04:55:22.218595 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 04:55:22.218601 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 04:55:22.218608 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 04:55:22.218615 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 04:55:22.218621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 04:55:22.218628 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 04:55:22.218634 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 04:55:22.218642 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 04:55:22.218648 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 04:55:22.218655 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 04:55:22.218662 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 04:55:22.218668 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 04:55:22.218674 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-07 04:55:22.218681 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 04:55:22.218688 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 04:55:22.218694 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-07 04:55:22.218701 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-07 04:55:22.218707 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 04:55:22.218714 | orchestrator |
2026-04-07 04:55:22.218720 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-07 04:55:22.218727 | orchestrator | Tuesday 07 April 2026 04:55:01 +0000 (0:00:03.402) 0:02:13.968 *********
2026-04-07 04:55:22.218734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:55:22.218741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 04:55:22.218747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 04:55:22.218787 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:55:22.218796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-07 04:55:22.218803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-07 04:55:22.218811 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-07 04:55:22.218818 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:55:22.218825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-07 04:55:22.218832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-07 04:55:22.218839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-07 04:55:22.218846 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:55:22.218853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-07 04:55:22.218859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-07 04:55:22.218866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-07 04:55:22.218873 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.218880 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-07 04:55:22.218888 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-07 04:55:22.218894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-07 04:55:22.218901 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:55:22.218908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-07 04:55:22.218916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-07 04:55:22.218929 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-07 04:55:22.218952 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:55:22.218959 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-07 04:55:22.218967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-07 04:55:22.218974 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-07 04:55:22.218981 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:55:22.218988 | orchestrator |
2026-04-07 04:55:22.218996 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-07 04:55:22.219003 | orchestrator | Tuesday 07 April 2026 04:55:04 +0000 (0:00:02.025) 0:02:15.994 *********
2026-04-07 04:55:22.219010 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:55:22.219017 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:55:22.219024 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:55:22.219031 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:55:22.219053 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 04:55:22.219061 | orchestrator |
2026-04-07 04:55:22.219069 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-07 04:55:22.219076 | orchestrator | Tuesday 07 April 2026 04:55:05 +0000 (0:00:01.936) 0:02:17.931 *********
2026-04-07 04:55:22.219084 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219090 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:55:22.219098 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:55:22.219104 | orchestrator |
2026-04-07 04:55:22.219111 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-07 04:55:22.219118 | orchestrator | Tuesday 07 April 2026 04:55:07 +0000 (0:00:01.281) 0:02:19.213 *********
2026-04-07 04:55:22.219125 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219132 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:55:22.219139 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:55:22.219145 | orchestrator |
2026-04-07 04:55:22.219153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-07 04:55:22.219164 | orchestrator | Tuesday 07 April 2026 04:55:08 +0000 (0:00:01.319) 0:02:20.532 *********
2026-04-07 04:55:22.219171 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219178 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:55:22.219186 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:55:22.219192 | orchestrator |
2026-04-07 04:55:22.219199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-07 04:55:22.219207 | orchestrator | Tuesday 07 April 2026 04:55:09 +0000 (0:00:01.314) 0:02:21.847 *********
2026-04-07 04:55:22.219214 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:55:22.219221 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:55:22.219228 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:55:22.219234 | orchestrator |
2026-04-07 04:55:22.219241 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-07 04:55:22.219248 | orchestrator | Tuesday 07 April 2026 04:55:11 +0000 (0:00:01.353) 0:02:23.200 *********
2026-04-07 04:55:22.219254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 04:55:22.219261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 04:55:22.219268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 04:55:22.219274 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219281 | orchestrator |
2026-04-07 04:55:22.219288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-07 04:55:22.219295 | orchestrator | Tuesday 07 April 2026 04:55:12 +0000 (0:00:01.369) 0:02:24.570 *********
2026-04-07 04:55:22.219302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 04:55:22.219309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 04:55:22.219321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 04:55:22.219328 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219335 | orchestrator |
2026-04-07 04:55:22.219341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-07 04:55:22.219348 | orchestrator | Tuesday 07 April 2026 04:55:14 +0000 (0:00:01.682) 0:02:26.252 *********
2026-04-07 04:55:22.219355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-07 04:55:22.219361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-07 04:55:22.219368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-07 04:55:22.219375 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:55:22.219381 | orchestrator |
2026-04-07 04:55:22.219388 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-07 04:55:22.219395 | orchestrator | Tuesday 07 April 2026 04:55:16 +0000 (0:00:01.913) 0:02:28.165 *********
2026-04-07 04:55:22.219401 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:55:22.219407 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:55:22.219414 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:55:22.219421 | orchestrator |
2026-04-07 04:55:22.219427 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-07 04:55:22.219434 | orchestrator | Tuesday 07 April 2026 04:55:17 +0000 (0:00:01.725) 0:02:29.891 *********
2026-04-07 04:55:22.219440 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-07 04:55:22.219447 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-07 04:55:22.219453 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-07 04:55:22.219460 | orchestrator |
2026-04-07 04:55:22.219467 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-07 04:55:22.219473 | orchestrator | Tuesday 07 April 2026 04:55:19 +0000 (0:00:01.556) 0:02:31.448 *********
2026-04-07 04:55:22.219480 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:55:22.219487 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:55:22.219494 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:55:22.219501 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 04:55:22.219507 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 04:55:22.219514 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 04:55:22.219520 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 04:55:22.219526 | orchestrator |
2026-04-07 04:55:22.219531 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-07 04:55:22.219537 | orchestrator | Tuesday 07 April 2026 04:55:21 +0000 (0:00:01.816) 0:02:33.264 *********
2026-04-07 04:55:22.219542 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:55:22.219548 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:55:22.219554 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:55:22.219565 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 04:56:08.560456 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 04:56:08.560573 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 04:56:08.560590 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 04:56:08.560603 | orchestrator |
2026-04-07 04:56:08.560617 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-07 04:56:08.560629 | orchestrator | Tuesday 07 April 2026 04:55:24 +0000 (0:00:03.240) 0:02:36.505 *********
2026-04-07 04:56:08.560641 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:56:08.560654 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:56:08.560686 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:56:08.560698 | orchestrator | changed: [testbed-manager]
2026-04-07 04:56:08.560710 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:56:08.560722 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:56:08.560748 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:56:08.560759 | orchestrator |
2026-04-07 04:56:08.560772 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-07 04:56:08.560783 | orchestrator | Tuesday 07 April 2026 04:55:33 +0000 (0:00:08.525) 0:02:45.031 *********
2026-04-07 04:56:08.560794 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.560806 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.560817 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.560828 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.560839 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.560850 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.560861 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.560873 | orchestrator |
2026-04-07 04:56:08.560884 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-07 04:56:08.560895 | orchestrator | Tuesday 07 April 2026 04:55:35 +0000 (0:00:02.062) 0:02:47.093 *********
2026-04-07 04:56:08.560906 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.560918 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.560929 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.560940 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.560951 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.560962 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.560973 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.560984 | orchestrator |
2026-04-07 04:56:08.561044 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-07 04:56:08.561067 | orchestrator | Tuesday 07 April 2026 04:55:37 +0000 (0:00:02.025) 0:02:49.119 *********
2026-04-07 04:56:08.561087 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561105 | orchestrator | changed: [testbed-node-2]
2026-04-07 04:56:08.561118 | orchestrator | changed: [testbed-node-1]
2026-04-07 04:56:08.561131 | orchestrator | changed: [testbed-node-0]
2026-04-07 04:56:08.561145 | orchestrator | changed: [testbed-node-3]
2026-04-07 04:56:08.561158 | orchestrator | changed: [testbed-node-4]
2026-04-07 04:56:08.561171 | orchestrator | changed: [testbed-node-5]
2026-04-07 04:56:08.561184 | orchestrator |
2026-04-07 04:56:08.561197 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-04-07 04:56:08.561211 | orchestrator | Tuesday 07 April 2026 04:55:40 +0000 (0:00:02.966) 0:02:52.085 *********
2026-04-07 04:56:08.561225 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-07 04:56:08.561240 | orchestrator |
2026-04-07 04:56:08.561254 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-04-07 04:56:08.561267 | orchestrator | Tuesday 07 April 2026 04:55:43 +0000 (0:00:02.957) 0:02:55.043 *********
2026-04-07 04:56:08.561280 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561294 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561307 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561320 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561333 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561346 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561357 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561368 | orchestrator |
2026-04-07 04:56:08.561379 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-04-07 04:56:08.561389 | orchestrator | Tuesday 07 April 2026 04:55:45 +0000 (0:00:02.076) 0:02:57.120 *********
2026-04-07 04:56:08.561400 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561411 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561432 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561443 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561454 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561464 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561475 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561486 | orchestrator |
2026-04-07 04:56:08.561497 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-04-07 04:56:08.561508 | orchestrator | Tuesday 07 April 2026 04:55:47 +0000 (0:00:02.150) 0:02:59.270 *********
2026-04-07 04:56:08.561518 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561529 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561540 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561550 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561561 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561571 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561582 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561593 | orchestrator |
2026-04-07 04:56:08.561603 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-04-07 04:56:08.561614 | orchestrator | Tuesday 07 April 2026 04:55:49 +0000 (0:00:01.989) 0:03:01.259 *********
2026-04-07 04:56:08.561625 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561636 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561646 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561658 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561668 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561679 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561690 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561701 | orchestrator |
2026-04-07 04:56:08.561730 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-04-07 04:56:08.561742 | orchestrator | Tuesday 07 April 2026 04:55:51 +0000 (0:00:02.222) 0:03:03.482 *********
2026-04-07 04:56:08.561753 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561763 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561774 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561785 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561795 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561806 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561817 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561827 | orchestrator |
2026-04-07 04:56:08.561838 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-04-07 04:56:08.561849 | orchestrator | Tuesday 07 April 2026 04:55:53 +0000 (0:00:01.943) 0:03:05.426 *********
2026-04-07 04:56:08.561860 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561870 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.561881 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.561898 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.561909 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.561919 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.561930 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.561941 | orchestrator |
2026-04-07 04:56:08.561952 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-04-07 04:56:08.561963 | orchestrator | Tuesday 07 April 2026 04:55:55 +0000 (0:00:02.128) 0:03:07.554 *********
2026-04-07 04:56:08.561973 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.561984 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562011 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562086 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562098 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562109 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562120 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562163 | orchestrator |
2026-04-07 04:56:08.562175 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-04-07 04:56:08.562196 | orchestrator | Tuesday 07 April 2026 04:55:57 +0000 (0:00:02.196) 0:03:09.493 *********
2026-04-07 04:56:08.562207 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562218 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562229 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562240 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562250 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562261 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562272 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562283 | orchestrator |
2026-04-07 04:56:08.562294 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-04-07 04:56:08.562305 | orchestrator | Tuesday 07 April 2026 04:55:59 +0000 (0:00:02.196) 0:03:11.689 *********
2026-04-07 04:56:08.562316 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562326 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562337 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562348 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562359 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562370 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562381 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562392 | orchestrator |
2026-04-07 04:56:08.562403 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-04-07 04:56:08.562414 | orchestrator | Tuesday 07 April 2026 04:56:01 +0000 (0:00:02.184) 0:03:13.874 *********
2026-04-07 04:56:08.562424 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562435 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562446 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562457 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562468 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562478 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562489 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562500 | orchestrator |
2026-04-07 04:56:08.562511 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-04-07 04:56:08.562522 | orchestrator | Tuesday 07 April 2026 04:56:03 +0000 (0:00:01.877) 0:03:15.752 *********
2026-04-07 04:56:08.562532 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562543 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562554 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562565 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562575 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562586 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562597 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562608 | orchestrator |
2026-04-07 04:56:08.562619 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-04-07 04:56:08.562630 | orchestrator | Tuesday 07 April 2026 04:56:05 +0000 (0:00:02.063) 0:03:17.816 *********
2026-04-07 04:56:08.562640 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562651 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562662 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562673 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562684 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:08.562695 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:08.562705 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:08.562716 | orchestrator |
2026-04-07 04:56:08.562728 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-04-07 04:56:08.562739 | orchestrator | Tuesday 07 April 2026 04:56:07 +0000 (0:00:01.954) 0:03:19.770 *********
2026-04-07 04:56:08.562750 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:08.562761 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:08.562772 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:08.562784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 04:56:08.562805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 04:56:08.562816 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:08.562836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 04:56:32.291371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 04:56:32.291480 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})
2026-04-07 04:56:32.291500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})
2026-04-07 04:56:32.291519 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291526 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291532 | orchestrator |
2026-04-07 04:56:32.291540 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-07 04:56:32.291548 | orchestrator | Tuesday 07 April 2026 04:56:09 +0000 (0:00:02.068) 0:03:21.839 *********
2026-04-07 04:56:32.291554 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291560 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291567 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291573 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291579 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291585 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291592 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291598 | orchestrator |
2026-04-07 04:56:32.291604 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-07 04:56:32.291610 | orchestrator | Tuesday 07 April 2026 04:56:11 +0000 (0:00:01.920) 0:03:23.760 *********
2026-04-07 04:56:32.291617 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291623 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291629 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291635 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291641 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291648 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291654 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291660 | orchestrator |
2026-04-07 04:56:32.291666 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-07 04:56:32.291673 | orchestrator | Tuesday 07 April 2026 04:56:14 +0000 (0:00:02.226) 0:03:25.986 *********
2026-04-07 04:56:32.291679 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291685 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291701 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291707 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291721 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291727 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291734 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291740 | orchestrator |
2026-04-07 04:56:32.291746 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-07 04:56:32.291752 | orchestrator | Tuesday 07 April 2026 04:56:16 +0000 (0:00:02.066) 0:03:28.053 *********
2026-04-07 04:56:32.291758 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291765 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291771 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291777 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291784 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291790 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291796 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291817 | orchestrator |
2026-04-07 04:56:32.291823 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-07 04:56:32.291830 | orchestrator | Tuesday 07 April 2026 04:56:18 +0000 (0:00:02.242) 0:03:30.295 *********
2026-04-07 04:56:32.291836 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291842 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291848 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291854 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291860 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291866 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291872 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291878 | orchestrator |
2026-04-07 04:56:32.291884 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-07 04:56:32.291890 | orchestrator | Tuesday 07 April 2026 04:56:20 +0000 (0:00:02.178) 0:03:32.474 *********
2026-04-07 04:56:32.291896 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291903 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291909 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291915 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.291921 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.291927 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.291933 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291940 | orchestrator |
2026-04-07 04:56:32.291947 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-07 04:56:32.291954 | orchestrator | Tuesday 07 April 2026 04:56:22 +0000 (0:00:01.841) 0:03:34.315 *********
2026-04-07 04:56:32.291962 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:56:32.291969 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:56:32.291976 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:56:32.291983 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:56:32.291991 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-07 04:56:32.291999 | orchestrator |
2026-04-07 04:56:32.292006 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-07 04:56:32.292014 | orchestrator | Tuesday 07 April 2026 04:56:24 +0000 (0:00:02.397) 0:03:36.712 *********
2026-04-07 04:56:32.292033 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:56:32.292041 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:56:32.292048 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:56:32.292056 | orchestrator |
2026-04-07 04:56:32.292063 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-07 04:56:32.292070 | orchestrator | Tuesday 07 April 2026 04:56:26 +0000 (0:00:01.402) 0:03:38.115 *********
2026-04-07 04:56:32.292090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})
2026-04-07 04:56:32.292098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})
2026-04-07 04:56:32.292105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})
2026-04-07 04:56:32.292116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})
2026-04-07 04:56:32.292123 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.292131 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.292139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})
2026-04-07 04:56:32.292146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})
2026-04-07 04:56:32.292160 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.292167 | orchestrator |
2026-04-07 04:56:32.292174 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-07 04:56:32.292182 | orchestrator | Tuesday 07 April 2026 04:56:27 +0000 (0:00:01.470) 0:03:39.586 *********
2026-04-07 04:56:32.292191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292207 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.292215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292231 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.292238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})
2026-04-07 04:56:32.292251 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.292258 | orchestrator |
2026-04-07 04:56:32.292264 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-07 04:56:32.292270 | orchestrator | Tuesday 07 April 2026 04:56:29 +0000 (0:00:01.698) 0:03:41.284 *********
2026-04-07 04:56:32.292277 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.292283 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.292289 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.292295 | orchestrator |
2026-04-07 04:56:32.292301 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-07 04:56:32.292307 | orchestrator | Tuesday 07 April 2026 04:56:30 +0000 (0:00:01.350) 0:03:42.635 *********
2026-04-07 04:56:32.292314 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.292320 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:32.292326 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:32.292332 | orchestrator |
2026-04-07 04:56:32.292338 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-07 04:56:32.292344 | orchestrator | Tuesday 07 April 2026 04:56:32 +0000 (0:00:01.387) 0:03:44.023 *********
2026-04-07 04:56:32.292350 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:56:32.292361 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:56:37.381914 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:56:37.382140 | orchestrator |
2026-04-07 04:56:37.382196 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-07 04:56:37.382211 | orchestrator | Tuesday 07 April 2026 04:56:33 +0000 (0:00:01.354) 0:03:45.377 *********
2026-04-07 04:56:37.382222 | orchestrator | skipping:
[testbed-node-3] 2026-04-07 04:56:37.382234 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:37.382245 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:37.382256 | orchestrator | 2026-04-07 04:56:37.382268 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-04-07 04:56:37.382279 | orchestrator | Tuesday 07 April 2026 04:56:34 +0000 (0:00:01.308) 0:03:46.685 ********* 2026-04-07 04:56:37.382306 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}) 2026-04-07 04:56:37.382320 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}) 2026-04-07 04:56:37.382331 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}) 2026-04-07 04:56:37.382342 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 04:56:37.382353 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}) 2026-04-07 04:56:37.382363 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 04:56:37.382374 | orchestrator | 2026-04-07 04:56:37.382385 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-04-07 04:56:37.382397 | orchestrator | Tuesday 07 April 2026 04:56:37 +0000 (0:00:02.362) 0:03:49.047 ********* 2026-04-07 04:56:37.382414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': 
{'exists': True, 'path': '/dev/ceph-44abcd21-31e3-595d-ad07-7c010500a60a/osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1775529860.3392444, 'mtime': 1775529860.3372443, 'ctime': 1775529860.3372443, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-44abcd21-31e3-595d-ad07-7c010500a60a/osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:37.382457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a/osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1775529881.3695838, 'mtime': 1775529881.3645837, 'ctime': 1775529881.3645837, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 
'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a/osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:37.382483 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:37.382503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ccafa0da-42f8-5022-b95e-1902d46c646f/osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1775529862.5629697, 'mtime': 1775529862.5579696, 'ctime': 1775529862.5579696, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ccafa0da-42f8-5022-b95e-1902d46c646f/osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 
'ansible_loop_var': 'item'})  2026-04-07 04:56:37.382518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8941099b-00de-50f1-81f7-f26159704c09/osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1775529881.0692647, 'mtime': 1775529881.0652645, 'ctime': 1775529881.0652645, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8941099b-00de-50f1-81f7-f26159704c09/osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:37.382532 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:37.382553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-754aebfc-d76c-537f-941d-8ad36483cdb2/osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1775529862.2035186, 'mtime': 1775529862.1985185, 'ctime': 1775529862.1985185, 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-754aebfc-d76c-537f-941d-8ad36483cdb2/osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.275857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d/osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1775529880.8148184, 'mtime': 1775529880.8108182, 'ctime': 1775529880.8108182, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d/osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 
'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.275973 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:43.275991 | orchestrator | 2026-04-07 04:56:43.276004 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-07 04:56:43.276016 | orchestrator | Tuesday 07 April 2026 04:56:38 +0000 (0:00:01.432) 0:03:50.480 ********* 2026-04-07 04:56:43.276029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 04:56:43.276087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 04:56:43.276099 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:43.276110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 04:56:43.276121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 04:56:43.276132 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:43.276144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 04:56:43.276155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 04:56:43.276166 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:43.276177 | orchestrator | 2026-04-07 04:56:43.276189 | orchestrator | TASK 
[ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-07 04:56:43.276201 | orchestrator | Tuesday 07 April 2026 04:56:39 +0000 (0:00:01.400) 0:03:51.881 ********* 2026-04-07 04:56:43.276215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276264 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:43.276275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276305 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276316 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:43.276328 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 
'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276357 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:43.276370 | orchestrator | 2026-04-07 04:56:43.276383 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-07 04:56:43.276397 | orchestrator | Tuesday 07 April 2026 04:56:41 +0000 (0:00:01.405) 0:03:53.286 ********* 2026-04-07 04:56:43.276410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 04:56:43.276423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 04:56:43.276436 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:43.276449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 04:56:43.276462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 04:56:43.276476 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:43.276489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  
2026-04-07 04:56:43.276500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 04:56:43.276511 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:43.276522 | orchestrator | 2026-04-07 04:56:43.276533 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-04-07 04:56:43.276544 | orchestrator | Tuesday 07 April 2026 04:56:42 +0000 (0:00:01.628) 0:03:54.915 ********* 2026-04-07 04:56:43.276564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276576 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276587 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:43.276598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 
'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276620 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:43.276631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:43.276649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 04:56:54.028532 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:54.028648 | orchestrator | 2026-04-07 04:56:54.028666 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-07 04:56:54.028680 | orchestrator | Tuesday 07 April 2026 04:56:44 +0000 (0:00:01.462) 0:03:56.378 ********* 2026-04-07 04:56:54.028691 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:56:54.028703 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:56:54.028729 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:56:54.028741 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:54.028752 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:54.028763 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:54.028774 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:56:54.028785 | orchestrator | 2026-04-07 04:56:54.028797 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-07 04:56:54.028808 | orchestrator | 
Tuesday 07 April 2026 04:56:46 +0000 (0:00:01.955) 0:03:58.334 ********* 2026-04-07 04:56:54.028820 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:56:54.028831 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:56:54.028842 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:56:54.028853 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:56:54.028864 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 04:56:54.028876 | orchestrator | 2026-04-07 04:56:54.028887 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-07 04:56:54.028898 | orchestrator | Tuesday 07 April 2026 04:56:48 +0000 (0:00:02.472) 0:04:00.806 ********* 2026-04-07 04:56:54.028910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.028940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.028952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.028964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.028974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.028986 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:54.028997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029084 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:54.029102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029166 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:54.029185 | orchestrator | 2026-04-07 04:56:54.029204 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-07 04:56:54.029222 | orchestrator | Tuesday 07 April 2026 04:56:50 +0000 (0:00:01.439) 0:04:02.246 ********* 2026-04-07 04:56:54.029241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029380 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:54.029403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029511 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:54.029530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029626 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:54.029639 | orchestrator | 2026-04-07 04:56:54.029650 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-07 04:56:54.029662 | orchestrator | Tuesday 07 April 2026 04:56:52 +0000 (0:00:01.870) 0:04:04.116 ********* 2026-04-07 04:56:54.029673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029727 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:56:54.029738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029792 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:56:54.029803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 04:56:54.029878 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:56:54.029889 | orchestrator | 2026-04-07 04:56:54.029900 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-04-07 04:56:54.029911 | orchestrator | Tuesday 07 April 2026 04:56:53 +0000 (0:00:01.567) 
0:04:05.684 ********* 2026-04-07 04:56:54.029922 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:56:54.029933 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:56:54.029954 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.973607 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.973715 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.973730 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.973741 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.973751 | orchestrator | 2026-04-07 04:57:08.973763 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-04-07 04:57:08.973789 | orchestrator | Tuesday 07 April 2026 04:56:55 +0000 (0:00:01.865) 0:04:07.549 ********* 2026-04-07 04:57:08.973799 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.973810 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.973820 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.973830 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.973840 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.973850 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.973860 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.973870 | orchestrator | 2026-04-07 04:57:08.973880 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-04-07 04:57:08.973890 | orchestrator | Tuesday 07 April 2026 04:56:57 +0000 (0:00:02.141) 0:04:09.691 ********* 2026-04-07 04:57:08.973900 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.973909 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.973919 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.973929 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.973939 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.973948 | orchestrator | skipping: 
[testbed-node-5] 2026-04-07 04:57:08.973958 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.973968 | orchestrator | 2026-04-07 04:57:08.973978 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-04-07 04:57:08.973989 | orchestrator | Tuesday 07 April 2026 04:56:59 +0000 (0:00:02.192) 0:04:11.884 ********* 2026-04-07 04:57:08.973998 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.974008 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.974100 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.974111 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.974121 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.974131 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.974141 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.974153 | orchestrator | 2026-04-07 04:57:08.974165 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-04-07 04:57:08.974178 | orchestrator | Tuesday 07 April 2026 04:57:01 +0000 (0:00:01.910) 0:04:13.794 ********* 2026-04-07 04:57:08.974189 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.974202 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.974213 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.974224 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.974235 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.974247 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.974258 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.974269 | orchestrator | 2026-04-07 04:57:08.974281 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-04-07 04:57:08.974315 | orchestrator | Tuesday 07 April 2026 04:57:03 +0000 (0:00:02.151) 0:04:15.945 ********* 2026-04-07 04:57:08.974327 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.974338 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.974350 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.974361 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.974372 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.974383 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.974395 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.974407 | orchestrator | 2026-04-07 04:57:08.974419 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-04-07 04:57:08.974430 | orchestrator | Tuesday 07 April 2026 04:57:05 +0000 (0:00:01.939) 0:04:17.885 ********* 2026-04-07 04:57:08.974442 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.974454 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.974465 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.974477 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:08.974489 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:08.974500 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:08.974512 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:08.974523 | orchestrator | 2026-04-07 04:57:08.974533 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-04-07 04:57:08.974543 | orchestrator | Tuesday 07 April 2026 04:57:08 +0000 (0:00:02.202) 0:04:20.087 ********* 2026-04-07 04:57:08.974554 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:08.974565 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 
04:57:08.974577 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:08.974588 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:08.974598 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:08.974610 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:08.974620 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:08.974646 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:08.974662 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:08.974672 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:08.974682 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:08.974692 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd 
pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:08.974702 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:08.974720 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:08.974729 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:08.974739 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:08.974749 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:08.974758 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:08.974768 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:08.974778 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:08.974788 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:08.974797 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:08.974807 | orchestrator | skipping: [testbed-node-3] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:08.974817 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:08.974826 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:08.974836 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:08.974846 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:08.974856 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:08.974865 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:08.974875 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:08.974891 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.240393 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:13.240531 | 
orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.240570 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.240613 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.240629 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.240640 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.240651 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.240664 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.240675 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.240686 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 
04:57:13.240697 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:13.240709 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.240720 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:13.240731 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.240742 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.240753 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.240764 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.240774 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:13.240786 | orchestrator | 2026-04-07 04:57:13.240798 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-04-07 04:57:13.240811 | orchestrator | Tuesday 07 April 2026 04:57:10 +0000 (0:00:02.229) 0:04:22.317 ********* 2026-04-07 04:57:13.240822 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:13.240833 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:13.240844 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:13.240855 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:13.240865 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:13.240876 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 04:57:13.240888 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:13.240901 | orchestrator | 2026-04-07 04:57:13.240915 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-04-07 04:57:13.240928 | orchestrator | Tuesday 07 April 2026 04:57:12 +0000 (0:00:02.117) 0:04:24.434 ********* 2026-04-07 04:57:13.240941 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.240955 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.240977 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.240990 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.241021 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.241034 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.241045 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:13.241056 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.241067 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.241099 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.241110 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.241157 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.241170 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.241181 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:13.241192 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.241203 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.241214 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.241225 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 
04:57:13.241236 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.241247 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.241258 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:13.241269 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.241279 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:13.241302 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:13.241313 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:13.241324 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:13.241335 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:13.241347 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': 
'0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:13.241367 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:40.339174 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:40.339294 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:40.339312 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:40.339325 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:40.339338 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:40.339350 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:40.339361 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:40.339374 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 04:57:40.339385 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 04:57:40.339396 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 04:57:40.339407 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:40.339418 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 04:57:40.339429 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:40.339440 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:40.339475 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:40.339487 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 04:57:40.339498 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:40.339509 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 04:57:40.339520 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-04-07 04:57:40.339530 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:40.339542 | orchestrator | 2026-04-07 04:57:40.339555 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-04-07 04:57:40.339567 | orchestrator | Tuesday 07 April 2026 04:57:14 +0000 (0:00:02.166) 0:04:26.602 ********* 2026-04-07 04:57:40.339578 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:40.339588 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:40.339599 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:40.339610 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:40.339621 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:40.339632 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:40.339643 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:40.339656 | orchestrator | 2026-04-07 04:57:40.339671 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-04-07 04:57:40.339684 | orchestrator | Tuesday 07 April 2026 04:57:16 +0000 (0:00:02.117) 0:04:28.719 ********* 2026-04-07 04:57:40.339696 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:57:40.339709 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:40.339722 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:40.339734 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:40.339747 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:40.339760 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:40.339772 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:40.339785 | orchestrator | 2026-04-07 04:57:40.339798 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-04-07 04:57:40.339830 | orchestrator | Tuesday 07 April 2026 04:57:18 +0000 (0:00:02.094) 0:04:30.814 ********* 2026-04-07 04:57:40.339843 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 04:57:40.339862 | orchestrator | skipping: [testbed-node-1] 2026-04-07 04:57:40.339875 | orchestrator | skipping: [testbed-node-2] 2026-04-07 04:57:40.339888 | orchestrator | skipping: [testbed-node-3] 2026-04-07 04:57:40.339899 | orchestrator | skipping: [testbed-node-4] 2026-04-07 04:57:40.339911 | orchestrator | skipping: [testbed-node-5] 2026-04-07 04:57:40.339924 | orchestrator | skipping: [testbed-manager] 2026-04-07 04:57:40.339936 | orchestrator | 2026-04-07 04:57:40.339949 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-07 04:57:40.339962 | orchestrator | Tuesday 07 April 2026 04:57:21 +0000 (0:00:02.308) 0:04:33.123 ********* 2026-04-07 04:57:40.339975 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-07 04:57:40.339990 | orchestrator | 2026-04-07 04:57:40.340003 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-04-07 04:57:40.340017 | orchestrator | Tuesday 07 April 2026 04:57:23 +0000 (0:00:02.683) 0:04:35.806 ********* 2026-04-07 04:57:40.340036 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 04:57:40.340057 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 04:57:40.340085 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 04:57:40.340128 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 04:57:40.340147 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 04:57:40.340170 | orchestrator | ok: [testbed-node-5] => 
(item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-07 04:57:40.340195 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-07 04:57:40.340212 | orchestrator |
2026-04-07 04:57:40.340229 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-04-07 04:57:40.340247 | orchestrator | Tuesday 07 April 2026 04:57:25 +0000 (0:00:02.106) 0:04:37.913 *********
2026-04-07 04:57:40.340262 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:57:40.340278 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:57:40.340296 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:57:40.340312 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:57:40.340330 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:57:40.340347 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:57:40.340364 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:57:40.340382 | orchestrator |
2026-04-07 04:57:40.340400 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-04-07 04:57:40.340417 | orchestrator | Tuesday 07 April 2026 04:57:28 +0000 (0:00:02.234) 0:04:40.148 *********
2026-04-07 04:57:40.340436 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:57:40.340454 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:57:40.340472 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:57:40.340484 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:57:40.340494 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:57:40.340505 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:57:40.340516 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:57:40.340526 | orchestrator |
2026-04-07 04:57:40.340537 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-07 04:57:40.340548 | orchestrator | Tuesday 07 April 2026 04:57:30 +0000 (0:00:01.926) 0:04:42.074 *********
2026-04-07 04:57:40.340559 | orchestrator | ok: [testbed-node-1]
2026-04-07 04:57:40.340570 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:57:40.340581 | orchestrator | ok: [testbed-node-2]
2026-04-07 04:57:40.340592 | orchestrator | ok: [testbed-node-3]
2026-04-07 04:57:40.340602 | orchestrator | ok: [testbed-node-4]
2026-04-07 04:57:40.340613 | orchestrator | ok: [testbed-node-5]
2026-04-07 04:57:40.340624 | orchestrator | ok: [testbed-manager]
2026-04-07 04:57:40.340635 | orchestrator |
2026-04-07 04:57:40.340645 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-07 04:57:40.340656 | orchestrator | Tuesday 07 April 2026 04:57:32 +0000 (0:00:02.683) 0:04:44.758 *********
2026-04-07 04:57:40.340667 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:57:40.340677 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:57:40.340688 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:57:40.340699 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:57:40.340709 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:57:40.340720 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:57:40.340731 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:57:40.340741 | orchestrator |
2026-04-07 04:57:40.340752 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-07 04:57:40.340763 | orchestrator | Tuesday 07 April 2026 04:57:35 +0000 (0:00:02.314) 0:04:47.073 *********
2026-04-07 04:57:40.340774 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:57:40.340785 | orchestrator | skipping: [testbed-node-1]
2026-04-07 04:57:40.340795 | orchestrator | skipping: [testbed-node-2]
2026-04-07 04:57:40.340806 | orchestrator | skipping: [testbed-node-3]
2026-04-07 04:57:40.340816 | orchestrator | skipping: [testbed-node-4]
2026-04-07 04:57:40.340837 | orchestrator | skipping: [testbed-node-5]
2026-04-07 04:57:40.340848 | orchestrator | skipping: [testbed-manager]
2026-04-07 04:57:40.340858 | orchestrator |
2026-04-07 04:57:40.340869 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-07 04:57:40.340880 | orchestrator | Tuesday 07 April 2026 04:57:37 +0000 (0:00:02.430) 0:04:49.503 *********
2026-04-07 04:57:40.340890 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:57:40.340901 | orchestrator |
2026-04-07 04:57:40.340912 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-07 04:57:40.340922 | orchestrator | Tuesday 07 April 2026 04:57:40 +0000 (0:00:02.634) 0:04:52.138 *********
2026-04-07 04:57:40.340933 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:57:40.340944 | orchestrator |
2026-04-07 04:57:40.340965 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-07 04:58:20.067257 | orchestrator |
2026-04-07 04:58:20.067371 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 04:58:20.067403 | orchestrator | Tuesday 07 April 2026 04:57:41 +0000 (0:00:01.427) 0:04:53.565 *********
2026-04-07 04:58:20.067415 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067426 | orchestrator |
2026-04-07 04:58:20.067437 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 04:58:20.067447 | orchestrator | Tuesday 07 April 2026 04:57:43 +0000 (0:00:01.521) 0:04:55.087 *********
2026-04-07 04:58:20.067457 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067467 | orchestrator |
2026-04-07 04:58:20.067477 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-07 04:58:20.067487 | orchestrator | Tuesday 07 April 2026 04:57:44 +0000 (0:00:01.151) 0:04:56.239 *********
2026-04-07 04:58:20.067500 | orchestrator |
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-07 04:58:20.067513 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-07 04:58:20.067524 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-07 04:58:20.067535 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-07 04:58:20.067548 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-07 04:58:20.067559 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}])
2026-04-07 04:58:20.067590 | orchestrator |
2026-04-07 04:58:20.067602 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-07 04:58:20.067611 | orchestrator |
2026-04-07 04:58:20.067621 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-07 04:58:20.067631 | orchestrator | Tuesday 07 April 2026 04:57:54 +0000 (0:00:10.596) 0:05:06.835 *********
2026-04-07 04:58:20.067641 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067651 | orchestrator |
2026-04-07 04:58:20.067660 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-07 04:58:20.067670 | orchestrator | Tuesday 07 April 2026 04:57:56 +0000 (0:00:01.470) 0:05:08.305 *********
2026-04-07 04:58:20.067680 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067689 | orchestrator |
2026-04-07 04:58:20.067699 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-07 04:58:20.067709 | orchestrator | Tuesday 07 April 2026 04:57:57 +0000 (0:00:01.143) 0:05:09.448 *********
2026-04-07 04:58:20.067719 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:20.067730 | orchestrator |
2026-04-07 04:58:20.067739 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-07 04:58:20.067749 | orchestrator | Tuesday 07 April 2026 04:57:58 +0000 (0:00:01.103) 0:05:10.552 *********
2026-04-07 04:58:20.067759 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067769 | orchestrator |
2026-04-07 04:58:20.067781 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-07 04:58:20.067793 | orchestrator | Tuesday 07 April 2026 04:57:59 +0000 (0:00:01.219) 0:05:11.772 *********
2026-04-07 04:58:20.067805 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-07 04:58:20.067816 | orchestrator |
2026-04-07 04:58:20.067827 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-07 04:58:20.067856 | orchestrator | Tuesday 07 April 2026 04:58:00 +0000 (0:00:01.193) 0:05:12.965 *********
2026-04-07 04:58:20.067868 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067880 | orchestrator |
2026-04-07 04:58:20.067896 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-07 04:58:20.067908 | orchestrator | Tuesday 07 April 2026 04:58:02 +0000 (0:00:01.461) 0:05:14.426 *********
2026-04-07 04:58:20.067920 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067931 | orchestrator |
2026-04-07 04:58:20.067942 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 04:58:20.067954 | orchestrator | Tuesday 07 April 2026 04:58:03 +0000 (0:00:01.157) 0:05:15.583 *********
2026-04-07 04:58:20.067965 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.067976 | orchestrator |
2026-04-07 04:58:20.067987 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 04:58:20.067998 | orchestrator | Tuesday 07 April 2026 04:58:05 +0000 (0:00:01.419) 0:05:17.003 *********
2026-04-07 04:58:20.068009 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.068020 | orchestrator |
2026-04-07 04:58:20.068032 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-07 04:58:20.068043 | orchestrator | Tuesday 07 April 2026 04:58:06 +0000 (0:00:01.144) 0:05:18.148 *********
2026-04-07 04:58:20.068054 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.068065 | orchestrator |
2026-04-07 04:58:20.068077 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-07 04:58:20.068089 | orchestrator | Tuesday 07 April 2026 04:58:07 +0000 (0:00:01.157) 0:05:19.305 *********
2026-04-07 04:58:20.068100 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.068112 | orchestrator |
2026-04-07 04:58:20.068124 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-07 04:58:20.068136 | orchestrator | Tuesday 07 April 2026 04:58:08 +0000 (0:00:01.173) 0:05:20.479 *********
2026-04-07 04:58:20.068185 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:20.068195 | orchestrator |
2026-04-07 04:58:20.068205 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-07 04:58:20.068215 | orchestrator | Tuesday 07 April 2026 04:58:09 +0000 (0:00:01.149) 0:05:21.628 *********
2026-04-07 04:58:20.068224 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.068234 | orchestrator |
2026-04-07 04:58:20.068244 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-07 04:58:20.068253 | orchestrator | Tuesday 07 April 2026 04:58:10 +0000 (0:00:01.142) 0:05:22.771 *********
2026-04-07 04:58:20.068263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:58:20.068273 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:58:20.068283 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:58:20.068292 | orchestrator |
2026-04-07 04:58:20.068302 | orchestrator |
TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-07 04:58:20.068312 | orchestrator | Tuesday 07 April 2026 04:58:12 +0000 (0:00:01.218) 0:05:24.422 *********
2026-04-07 04:58:20.068322 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:20.068331 | orchestrator |
2026-04-07 04:58:20.068341 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-07 04:58:20.068351 | orchestrator | Tuesday 07 April 2026 04:58:13 +0000 (0:00:03.029) 0:05:25.640 *********
2026-04-07 04:58:20.068360 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:58:20.068370 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 04:58:20.068380 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 04:58:20.068390 | orchestrator |
2026-04-07 04:58:20.068400 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-07 04:58:20.068409 | orchestrator | Tuesday 07 April 2026 04:58:16 +0000 (0:00:03.029) 0:05:28.669 *********
2026-04-07 04:58:20.068419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 04:58:20.068429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 04:58:20.068438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 04:58:20.068448 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:20.068458 | orchestrator |
2026-04-07 04:58:20.068468 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-07 04:58:20.068478 | orchestrator | Tuesday 07 April 2026 04:58:18 +0000 (0:00:01.376) 0:05:30.046 *********
2026-04-07 04:58:20.068490 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False',
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 04:58:20.068503 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 04:58:20.068513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 04:58:20.068523 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:20.068532 | orchestrator |
2026-04-07 04:58:20.068542 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-07 04:58:20.068552 | orchestrator | Tuesday 07 April 2026 04:58:19 +0000 (0:00:01.924) 0:05:31.971 *********
2026-04-07 04:58:20.068574 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.500894 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.501004 | orchestrator | skipping:
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.501019 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501032 | orchestrator |
2026-04-07 04:58:40.501044 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-07 04:58:40.501057 | orchestrator | Tuesday 07 April 2026 04:58:21 +0000 (0:00:01.138) 0:05:33.110 *********
2026-04-07 04:58:40.501070 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4cd0634997ff', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 04:58:14.204867', 'end': '2026-04-07 04:58:14.251911', 'delta': '0:00:00.047044', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4cd0634997ff'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.501085 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8d9f46c7c23', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 04:58:14.771819', 'end': '2026-04-07 04:58:14.827872', 'delta': '0:00:00.056053', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8d9f46c7c23'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.501096 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f4f6ca89ad43', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 04:58:15.528842', 'end': '2026-04-07 04:58:15.571834', 'delta': '0:00:00.042992', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4f6ca89ad43'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 04:58:40.501107 | orchestrator |
2026-04-07 04:58:40.501119 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-07 04:58:40.501129 | orchestrator | Tuesday 07 April 2026 04:58:22 +0000 (0:00:01.226) 0:05:34.336 *********
2026-04-07 04:58:40.501140 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:40.501151 | orchestrator |
2026-04-07 04:58:40.501212 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-07 04:58:40.501224 | orchestrator | Tuesday 07 April 2026 04:58:23 +0000 (0:00:01.567) 0:05:35.904 *********
2026-04-07 04:58:40.501234 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501244 | orchestrator |
2026-04-07 04:58:40.501254 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1]
*********************************
2026-04-07 04:58:40.501263 | orchestrator | Tuesday 07 April 2026 04:58:25 +0000 (0:00:01.276) 0:05:37.180 *********
2026-04-07 04:58:40.501273 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:40.501282 | orchestrator |
2026-04-07 04:58:40.501316 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-07 04:58:40.501326 | orchestrator | Tuesday 07 April 2026 04:58:26 +0000 (0:00:01.130) 0:05:38.311 *********
2026-04-07 04:58:40.501353 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-04-07 04:58:40.501363 | orchestrator |
2026-04-07 04:58:40.501373 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 04:58:40.501383 | orchestrator | Tuesday 07 April 2026 04:58:28 +0000 (0:00:02.380) 0:05:40.692 *********
2026-04-07 04:58:40.501392 | orchestrator | ok: [testbed-node-0]
2026-04-07 04:58:40.501402 | orchestrator |
2026-04-07 04:58:40.501411 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-07 04:58:40.501421 | orchestrator | Tuesday 07 April 2026 04:58:29 +0000 (0:00:01.145) 0:05:41.837 *********
2026-04-07 04:58:40.501431 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501440 | orchestrator |
2026-04-07 04:58:40.501450 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-07 04:58:40.501459 | orchestrator | Tuesday 07 April 2026 04:58:31 +0000 (0:00:01.245) 0:05:42.993 *********
2026-04-07 04:58:40.501469 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501478 | orchestrator |
2026-04-07 04:58:40.501488 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 04:58:40.501498 | orchestrator | Tuesday 07 April 2026 04:58:32 +0000 (0:00:01.140) 0:05:44.239 *********
2026-04-07 04:58:40.501507 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501517 | orchestrator |
2026-04-07 04:58:40.501527 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-07 04:58:40.501537 | orchestrator | Tuesday 07 April 2026 04:58:33 +0000 (0:00:01.140) 0:05:45.380 *********
2026-04-07 04:58:40.501546 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501556 | orchestrator |
2026-04-07 04:58:40.501566 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-07 04:58:40.501575 | orchestrator | Tuesday 07 April 2026 04:58:34 +0000 (0:00:01.264) 0:05:46.644 *********
2026-04-07 04:58:40.501585 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501594 | orchestrator |
2026-04-07 04:58:40.501604 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-07 04:58:40.501614 | orchestrator | Tuesday 07 April 2026 04:58:35 +0000 (0:00:01.156) 0:05:47.800 *********
2026-04-07 04:58:40.501623 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501633 | orchestrator |
2026-04-07 04:58:40.501642 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-07 04:58:40.501652 | orchestrator | Tuesday 07 April 2026 04:58:37 +0000 (0:00:01.194) 0:05:48.995 *********
2026-04-07 04:58:40.501661 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501671 | orchestrator |
2026-04-07 04:58:40.501680 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-07 04:58:40.501690 | orchestrator | Tuesday 07 April 2026 04:58:38 +0000 (0:00:01.148) 0:05:50.144 *********
2026-04-07 04:58:40.501699 | orchestrator | skipping: [testbed-node-0]
2026-04-07 04:58:40.501709 | orchestrator |
2026-04-07 04:58:40.501719 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-07
04:58:40.501729 | orchestrator | Tuesday 07 April 2026 04:58:39 +0000 (0:00:01.123) 0:05:51.267 ********* 2026-04-07 04:58:40.501738 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:58:40.501756 | orchestrator | 2026-04-07 04:58:40.501766 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-07 04:58:40.501776 | orchestrator | Tuesday 07 April 2026 04:58:40 +0000 (0:00:01.108) 0:05:52.376 ********* 2026-04-07 04:58:40.501786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:40.501796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:40.501806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:40.501823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 04:58:40.501842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:41.693277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:41.693359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:41.693374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 04:58:41.693404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:41.693424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 04:58:41.693432 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:58:41.693444 | orchestrator | 2026-04-07 04:58:41.693456 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 04:58:41.693468 | orchestrator | Tuesday 07 April 2026 04:58:41 +0000 (0:00:01.187) 0:05:53.563 ********* 2026-04-07 04:58:41.693496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693509 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693538 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693565 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:58:41.693584 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:59:07.019567 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:59:07.019734 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:59:07.019782 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 04:59:07.019806 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.019826 | orchestrator | 2026-04-07 04:59:07.019844 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-07 04:59:07.019863 | 
orchestrator | Tuesday 07 April 2026 04:58:42 +0000 (0:00:01.205) 0:05:54.769 ********* 2026-04-07 04:59:07.019880 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:59:07.019898 | orchestrator | 2026-04-07 04:59:07.019916 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-07 04:59:07.019935 | orchestrator | Tuesday 07 April 2026 04:58:44 +0000 (0:00:01.509) 0:05:56.279 ********* 2026-04-07 04:59:07.019953 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:59:07.019969 | orchestrator | 2026-04-07 04:59:07.019980 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 04:59:07.020008 | orchestrator | Tuesday 07 April 2026 04:58:45 +0000 (0:00:01.133) 0:05:57.412 ********* 2026-04-07 04:59:07.020018 | orchestrator | ok: [testbed-node-0] 2026-04-07 04:59:07.020028 | orchestrator | 2026-04-07 04:59:07.020038 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 04:59:07.020058 | orchestrator | Tuesday 07 April 2026 04:58:46 +0000 (0:00:01.504) 0:05:58.917 ********* 2026-04-07 04:59:07.020068 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020078 | orchestrator | 2026-04-07 04:59:07.020091 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 04:59:07.020103 | orchestrator | Tuesday 07 April 2026 04:58:48 +0000 (0:00:01.138) 0:06:00.056 ********* 2026-04-07 04:59:07.020115 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020127 | orchestrator | 2026-04-07 04:59:07.020138 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 04:59:07.020154 | orchestrator | Tuesday 07 April 2026 04:58:49 +0000 (0:00:01.232) 0:06:01.289 ********* 2026-04-07 04:59:07.020171 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020188 | orchestrator | 2026-04-07 04:59:07.020228 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 04:59:07.020245 | orchestrator | Tuesday 07 April 2026 04:58:50 +0000 (0:00:01.131) 0:06:02.420 ********* 2026-04-07 04:59:07.020263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 04:59:07.020279 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 04:59:07.020295 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 04:59:07.020310 | orchestrator | 2026-04-07 04:59:07.020326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 04:59:07.020340 | orchestrator | Tuesday 07 April 2026 04:58:52 +0000 (0:00:02.116) 0:06:04.536 ********* 2026-04-07 04:59:07.020354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 04:59:07.020369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 04:59:07.020384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 04:59:07.020400 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020416 | orchestrator | 2026-04-07 04:59:07.020432 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 04:59:07.020448 | orchestrator | Tuesday 07 April 2026 04:58:53 +0000 (0:00:01.165) 0:06:05.702 ********* 2026-04-07 04:59:07.020463 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020479 | orchestrator | 2026-04-07 04:59:07.020495 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 04:59:07.020510 | orchestrator | Tuesday 07 April 2026 04:58:54 +0000 (0:00:01.198) 0:06:06.901 ********* 2026-04-07 04:59:07.020525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 04:59:07.020541 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 
04:59:07.020559 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 04:59:07.020576 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 04:59:07.020593 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 04:59:07.020609 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 04:59:07.020627 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 04:59:07.020644 | orchestrator | 2026-04-07 04:59:07.020662 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 04:59:07.020681 | orchestrator | Tuesday 07 April 2026 04:58:57 +0000 (0:00:02.326) 0:06:09.227 ********* 2026-04-07 04:59:07.020698 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 04:59:07.020716 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 04:59:07.020734 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 04:59:07.020752 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 04:59:07.020771 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 04:59:07.020804 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 04:59:07.020821 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 04:59:07.020838 | orchestrator | 2026-04-07 04:59:07.020866 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-07 04:59:07.020884 | orchestrator | Tuesday 07 April 2026 04:59:00 +0000 (0:00:03.073) 0:06:12.301 
********* 2026-04-07 04:59:07.020901 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-04-07 04:59:07.020919 | orchestrator | 2026-04-07 04:59:07.020937 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-07 04:59:07.020948 | orchestrator | Tuesday 07 April 2026 04:59:02 +0000 (0:00:02.189) 0:06:14.490 ********* 2026-04-07 04:59:07.020958 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.020968 | orchestrator | 2026-04-07 04:59:07.020978 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-07 04:59:07.020987 | orchestrator | Tuesday 07 April 2026 04:59:03 +0000 (0:00:01.188) 0:06:15.679 ********* 2026-04-07 04:59:07.020997 | orchestrator | skipping: [testbed-node-0] 2026-04-07 04:59:07.021007 | orchestrator | 2026-04-07 04:59:07.021017 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-07 04:59:07.021026 | orchestrator | Tuesday 07 April 2026 04:59:04 +0000 (0:00:01.104) 0:06:16.783 ********* 2026-04-07 04:59:07.021036 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-04-07 04:59:07.021046 | orchestrator | 2026-04-07 04:59:07.021055 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-07 04:59:07.021078 | orchestrator | Tuesday 07 April 2026 04:59:07 +0000 (0:00:02.204) 0:06:18.987 ********* 2026-04-07 05:00:08.980320 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.980407 | orchestrator | 2026-04-07 05:00:08.980424 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-07 05:00:08.980437 | orchestrator | Tuesday 07 April 2026 04:59:08 +0000 (0:00:01.137) 0:06:20.125 ********* 2026-04-07 05:00:08.980449 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:00:08.980460 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:00:08.980472 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:00:08.980483 | orchestrator | 2026-04-07 05:00:08.980494 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-07 05:00:08.980506 | orchestrator | Tuesday 07 April 2026 04:59:10 +0000 (0:00:02.498) 0:06:22.624 ********* 2026-04-07 05:00:08.980516 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-04-07 05:00:08.980528 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-04-07 05:00:08.980539 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-04-07 05:00:08.980550 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-04-07 05:00:08.980561 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-04-07 05:00:08.980573 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-04-07 05:00:08.980584 | orchestrator | 2026-04-07 05:00:08.980595 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-07 05:00:08.980606 | orchestrator | Tuesday 07 April 2026 04:59:24 +0000 (0:00:13.770) 0:06:36.394 ********* 2026-04-07 05:00:08.980617 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:00:08.980628 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:00:08.980639 | orchestrator | 2026-04-07 05:00:08.980650 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-07 05:00:08.980682 | orchestrator | Tuesday 07 
April 2026 04:59:28 +0000 (0:00:03.980) 0:06:40.375 ********* 2026-04-07 05:00:08.980694 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:00:08.980705 | orchestrator | 2026-04-07 05:00:08.980716 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 05:00:08.980727 | orchestrator | Tuesday 07 April 2026 04:59:30 +0000 (0:00:02.605) 0:06:42.981 ********* 2026-04-07 05:00:08.980738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-07 05:00:08.980749 | orchestrator | 2026-04-07 05:00:08.980760 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 05:00:08.980771 | orchestrator | Tuesday 07 April 2026 04:59:32 +0000 (0:00:01.464) 0:06:44.445 ********* 2026-04-07 05:00:08.980781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-07 05:00:08.980792 | orchestrator | 2026-04-07 05:00:08.980803 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 05:00:08.980814 | orchestrator | Tuesday 07 April 2026 04:59:34 +0000 (0:00:01.562) 0:06:46.008 ********* 2026-04-07 05:00:08.980825 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.980836 | orchestrator | 2026-04-07 05:00:08.980947 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 05:00:08.980974 | orchestrator | Tuesday 07 April 2026 04:59:35 +0000 (0:00:01.617) 0:06:47.626 ********* 2026-04-07 05:00:08.980994 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981016 | orchestrator | 2026-04-07 05:00:08.981037 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 05:00:08.981056 | orchestrator | Tuesday 07 April 2026 04:59:36 +0000 (0:00:01.137) 0:06:48.763 ********* 2026-04-07 05:00:08.981080 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981106 | orchestrator | 2026-04-07 05:00:08.981129 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 05:00:08.981150 | orchestrator | Tuesday 07 April 2026 04:59:37 +0000 (0:00:01.119) 0:06:49.883 ********* 2026-04-07 05:00:08.981170 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981191 | orchestrator | 2026-04-07 05:00:08.981212 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 05:00:08.981246 | orchestrator | Tuesday 07 April 2026 04:59:39 +0000 (0:00:01.144) 0:06:51.027 ********* 2026-04-07 05:00:08.981318 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.981340 | orchestrator | 2026-04-07 05:00:08.981360 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 05:00:08.981380 | orchestrator | Tuesday 07 April 2026 04:59:40 +0000 (0:00:01.537) 0:06:52.565 ********* 2026-04-07 05:00:08.981399 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981418 | orchestrator | 2026-04-07 05:00:08.981439 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 05:00:08.981460 | orchestrator | Tuesday 07 April 2026 04:59:41 +0000 (0:00:01.106) 0:06:53.672 ********* 2026-04-07 05:00:08.981480 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981500 | orchestrator | 2026-04-07 05:00:08.981521 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 05:00:08.981542 | orchestrator | Tuesday 07 April 2026 04:59:42 +0000 (0:00:01.166) 0:06:54.838 ********* 2026-04-07 05:00:08.981563 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.981583 | orchestrator | 2026-04-07 05:00:08.981604 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 
05:00:08.981624 | orchestrator | Tuesday 07 April 2026 04:59:44 +0000 (0:00:01.561) 0:06:56.399 ********* 2026-04-07 05:00:08.981644 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.981664 | orchestrator | 2026-04-07 05:00:08.981706 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 05:00:08.981727 | orchestrator | Tuesday 07 April 2026 04:59:45 +0000 (0:00:01.539) 0:06:57.939 ********* 2026-04-07 05:00:08.981749 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981789 | orchestrator | 2026-04-07 05:00:08.981810 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 05:00:08.981830 | orchestrator | Tuesday 07 April 2026 04:59:47 +0000 (0:00:01.109) 0:06:59.048 ********* 2026-04-07 05:00:08.981851 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.981873 | orchestrator | 2026-04-07 05:00:08.981893 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 05:00:08.981915 | orchestrator | Tuesday 07 April 2026 04:59:48 +0000 (0:00:01.164) 0:07:00.213 ********* 2026-04-07 05:00:08.981935 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.981955 | orchestrator | 2026-04-07 05:00:08.981977 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 05:00:08.981998 | orchestrator | Tuesday 07 April 2026 04:59:49 +0000 (0:00:01.153) 0:07:01.367 ********* 2026-04-07 05:00:08.982069 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982095 | orchestrator | 2026-04-07 05:00:08.982116 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-07 05:00:08.982135 | orchestrator | Tuesday 07 April 2026 04:59:50 +0000 (0:00:01.125) 0:07:02.493 ********* 2026-04-07 05:00:08.982153 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982171 | orchestrator | 
2026-04-07 05:00:08.982189 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 05:00:08.982207 | orchestrator | Tuesday 07 April 2026 04:59:51 +0000 (0:00:01.145) 0:07:03.639 ********* 2026-04-07 05:00:08.982226 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982244 | orchestrator | 2026-04-07 05:00:08.982262 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 05:00:08.982305 | orchestrator | Tuesday 07 April 2026 04:59:52 +0000 (0:00:01.133) 0:07:04.772 ********* 2026-04-07 05:00:08.982323 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982342 | orchestrator | 2026-04-07 05:00:08.982360 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 05:00:08.982379 | orchestrator | Tuesday 07 April 2026 04:59:53 +0000 (0:00:01.106) 0:07:05.878 ********* 2026-04-07 05:00:08.982404 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.982423 | orchestrator | 2026-04-07 05:00:08.982439 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 05:00:08.982450 | orchestrator | Tuesday 07 April 2026 04:59:55 +0000 (0:00:01.161) 0:07:07.039 ********* 2026-04-07 05:00:08.982461 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.982472 | orchestrator | 2026-04-07 05:00:08.982483 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 05:00:08.982494 | orchestrator | Tuesday 07 April 2026 04:59:56 +0000 (0:00:01.164) 0:07:08.204 ********* 2026-04-07 05:00:08.982505 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:08.982515 | orchestrator | 2026-04-07 05:00:08.982526 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-07 05:00:08.982537 | orchestrator | Tuesday 07 April 2026 04:59:57 +0000 (0:00:01.111) 0:07:09.315 
********* 2026-04-07 05:00:08.982548 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982559 | orchestrator | 2026-04-07 05:00:08.982570 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-07 05:00:08.982581 | orchestrator | Tuesday 07 April 2026 04:59:58 +0000 (0:00:01.119) 0:07:10.435 ********* 2026-04-07 05:00:08.982592 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982602 | orchestrator | 2026-04-07 05:00:08.982613 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-07 05:00:08.982624 | orchestrator | Tuesday 07 April 2026 04:59:59 +0000 (0:00:01.123) 0:07:11.559 ********* 2026-04-07 05:00:08.982635 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982646 | orchestrator | 2026-04-07 05:00:08.982657 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-07 05:00:08.982668 | orchestrator | Tuesday 07 April 2026 05:00:00 +0000 (0:00:01.179) 0:07:12.738 ********* 2026-04-07 05:00:08.982678 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982701 | orchestrator | 2026-04-07 05:00:08.982712 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-07 05:00:08.982722 | orchestrator | Tuesday 07 April 2026 05:00:01 +0000 (0:00:01.145) 0:07:13.884 ********* 2026-04-07 05:00:08.982733 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982744 | orchestrator | 2026-04-07 05:00:08.982756 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-07 05:00:08.982766 | orchestrator | Tuesday 07 April 2026 05:00:03 +0000 (0:00:01.167) 0:07:15.051 ********* 2026-04-07 05:00:08.982777 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982788 | orchestrator | 2026-04-07 05:00:08.982806 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-04-07 05:00:08.982817 | orchestrator | Tuesday 07 April 2026 05:00:04 +0000 (0:00:01.170) 0:07:16.222 ********* 2026-04-07 05:00:08.982828 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982839 | orchestrator | 2026-04-07 05:00:08.982850 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-07 05:00:08.982861 | orchestrator | Tuesday 07 April 2026 05:00:05 +0000 (0:00:01.150) 0:07:17.373 ********* 2026-04-07 05:00:08.982872 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982883 | orchestrator | 2026-04-07 05:00:08.982894 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-07 05:00:08.982904 | orchestrator | Tuesday 07 April 2026 05:00:06 +0000 (0:00:01.216) 0:07:18.590 ********* 2026-04-07 05:00:08.982915 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982926 | orchestrator | 2026-04-07 05:00:08.982937 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-07 05:00:08.982947 | orchestrator | Tuesday 07 April 2026 05:00:07 +0000 (0:00:01.199) 0:07:19.789 ********* 2026-04-07 05:00:08.982958 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:08.982969 | orchestrator | 2026-04-07 05:00:08.982980 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-07 05:00:08.982990 | orchestrator | Tuesday 07 April 2026 05:00:08 +0000 (0:00:01.164) 0:07:20.953 ********* 2026-04-07 05:00:59.588037 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588145 | orchestrator | 2026-04-07 05:00:59.588163 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-07 05:00:59.588175 | orchestrator | Tuesday 07 April 2026 05:00:10 +0000 (0:00:01.085) 0:07:22.039 ********* 2026-04-07 05:00:59.588186 | orchestrator | 
skipping: [testbed-node-0] 2026-04-07 05:00:59.588196 | orchestrator | 2026-04-07 05:00:59.588206 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-07 05:00:59.588216 | orchestrator | Tuesday 07 April 2026 05:00:11 +0000 (0:00:01.085) 0:07:23.124 ********* 2026-04-07 05:00:59.588226 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:59.588236 | orchestrator | 2026-04-07 05:00:59.588246 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-07 05:00:59.588256 | orchestrator | Tuesday 07 April 2026 05:00:13 +0000 (0:00:01.999) 0:07:25.124 ********* 2026-04-07 05:00:59.588266 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:59.588276 | orchestrator | 2026-04-07 05:00:59.588286 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-07 05:00:59.588295 | orchestrator | Tuesday 07 April 2026 05:00:15 +0000 (0:00:02.671) 0:07:27.795 ********* 2026-04-07 05:00:59.588305 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-07 05:00:59.588398 | orchestrator | 2026-04-07 05:00:59.588413 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-07 05:00:59.588423 | orchestrator | Tuesday 07 April 2026 05:00:17 +0000 (0:00:01.485) 0:07:29.282 ********* 2026-04-07 05:00:59.588433 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588442 | orchestrator | 2026-04-07 05:00:59.588453 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-07 05:00:59.588462 | orchestrator | Tuesday 07 April 2026 05:00:18 +0000 (0:00:01.159) 0:07:30.441 ********* 2026-04-07 05:00:59.588495 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588506 | orchestrator | 2026-04-07 05:00:59.588515 | orchestrator | TASK [ceph-container-common : Remove ceph udev 
rules] ************************** 2026-04-07 05:00:59.588525 | orchestrator | Tuesday 07 April 2026 05:00:19 +0000 (0:00:01.139) 0:07:31.580 ********* 2026-04-07 05:00:59.588535 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 05:00:59.588545 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 05:00:59.588557 | orchestrator | 2026-04-07 05:00:59.588568 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-07 05:00:59.588585 | orchestrator | Tuesday 07 April 2026 05:00:21 +0000 (0:00:01.847) 0:07:33.430 ********* 2026-04-07 05:00:59.588603 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:59.588620 | orchestrator | 2026-04-07 05:00:59.588636 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-07 05:00:59.588653 | orchestrator | Tuesday 07 April 2026 05:00:23 +0000 (0:00:01.671) 0:07:35.101 ********* 2026-04-07 05:00:59.588672 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588690 | orchestrator | 2026-04-07 05:00:59.588705 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-07 05:00:59.588715 | orchestrator | Tuesday 07 April 2026 05:00:24 +0000 (0:00:01.168) 0:07:36.270 ********* 2026-04-07 05:00:59.588725 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588735 | orchestrator | 2026-04-07 05:00:59.588744 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-07 05:00:59.588754 | orchestrator | Tuesday 07 April 2026 05:00:25 +0000 (0:00:01.118) 0:07:37.388 ********* 2026-04-07 05:00:59.588764 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.588773 | orchestrator | 2026-04-07 05:00:59.588783 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-07 
05:00:59.588793 | orchestrator | Tuesday 07 April 2026 05:00:26 +0000 (0:00:01.154) 0:07:38.543 ********* 2026-04-07 05:00:59.588803 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-07 05:00:59.588812 | orchestrator | 2026-04-07 05:00:59.588822 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-07 05:00:59.588831 | orchestrator | Tuesday 07 April 2026 05:00:28 +0000 (0:00:01.452) 0:07:39.996 ********* 2026-04-07 05:00:59.588841 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:00:59.588857 | orchestrator | 2026-04-07 05:00:59.588889 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-07 05:00:59.588907 | orchestrator | Tuesday 07 April 2026 05:00:29 +0000 (0:00:01.822) 0:07:41.819 ********* 2026-04-07 05:00:59.588953 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 05:00:59.588971 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 05:00:59.588988 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 05:00:59.589006 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.589023 | orchestrator | 2026-04-07 05:00:59.589038 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-07 05:00:59.589048 | orchestrator | Tuesday 07 April 2026 05:00:30 +0000 (0:00:01.156) 0:07:42.975 ********* 2026-04-07 05:00:59.589057 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:00:59.589067 | orchestrator | 2026-04-07 05:00:59.589077 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-07 05:00:59.589086 | orchestrator | Tuesday 07 April 2026 05:00:32 +0000 (0:00:01.154) 0:07:44.129 ********* 2026-04-07 05:00:59.589096 | orchestrator | 
skipping: [testbed-node-0]
2026-04-07 05:00:59.589105 | orchestrator |
2026-04-07 05:00:59.589115 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-07 05:00:59.589125 | orchestrator | Tuesday 07 April 2026 05:00:33 +0000 (0:00:01.262) 0:07:45.392 *********
2026-04-07 05:00:59.589144 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589154 | orchestrator |
2026-04-07 05:00:59.589164 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-07 05:00:59.589192 | orchestrator | Tuesday 07 April 2026 05:00:34 +0000 (0:00:01.205) 0:07:46.597 *********
2026-04-07 05:00:59.589202 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589212 | orchestrator |
2026-04-07 05:00:59.589222 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-07 05:00:59.589232 | orchestrator | Tuesday 07 April 2026 05:00:35 +0000 (0:00:01.198) 0:07:47.796 *********
2026-04-07 05:00:59.589241 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589251 | orchestrator |
2026-04-07 05:00:59.589260 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-07 05:00:59.589270 | orchestrator | Tuesday 07 April 2026 05:00:37 +0000 (0:00:02.629) 0:07:48.985 *********
2026-04-07 05:00:59.589279 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:00:59.589289 | orchestrator |
2026-04-07 05:00:59.589299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-07 05:00:59.589309 | orchestrator | Tuesday 07 April 2026 05:00:39 +0000 (0:00:02.629) 0:07:51.614 *********
2026-04-07 05:00:59.589341 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:00:59.589351 | orchestrator |
2026-04-07 05:00:59.589361 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-07 05:00:59.589371 | orchestrator | Tuesday 07 April 2026 05:00:40 +0000 (0:00:01.241) 0:07:52.857 *********
2026-04-07 05:00:59.589381 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-07 05:00:59.589391 | orchestrator |
2026-04-07 05:00:59.589400 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-07 05:00:59.589410 | orchestrator | Tuesday 07 April 2026 05:00:42 +0000 (0:00:01.511) 0:07:54.369 *********
2026-04-07 05:00:59.589420 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589429 | orchestrator |
2026-04-07 05:00:59.589439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-07 05:00:59.589449 | orchestrator | Tuesday 07 April 2026 05:00:43 +0000 (0:00:01.118) 0:07:55.487 *********
2026-04-07 05:00:59.589458 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589468 | orchestrator |
2026-04-07 05:00:59.589478 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-07 05:00:59.589488 | orchestrator | Tuesday 07 April 2026 05:00:44 +0000 (0:00:01.120) 0:07:56.607 *********
2026-04-07 05:00:59.589497 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589507 | orchestrator |
2026-04-07 05:00:59.589517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-07 05:00:59.589526 | orchestrator | Tuesday 07 April 2026 05:00:45 +0000 (0:00:01.123) 0:07:57.731 *********
2026-04-07 05:00:59.589536 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589545 | orchestrator |
2026-04-07 05:00:59.589555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-07 05:00:59.589565 | orchestrator | Tuesday 07 April 2026 05:00:46 +0000 (0:00:01.129) 0:07:58.861 *********
2026-04-07 05:00:59.589574 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589584 | orchestrator |
2026-04-07 05:00:59.589594 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-07 05:00:59.589603 | orchestrator | Tuesday 07 April 2026 05:00:48 +0000 (0:00:01.137) 0:07:59.999 *********
2026-04-07 05:00:59.589613 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589623 | orchestrator |
2026-04-07 05:00:59.589632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-07 05:00:59.589642 | orchestrator | Tuesday 07 April 2026 05:00:49 +0000 (0:00:01.139) 0:08:01.139 *********
2026-04-07 05:00:59.589652 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589661 | orchestrator |
2026-04-07 05:00:59.589671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-07 05:00:59.589680 | orchestrator | Tuesday 07 April 2026 05:00:50 +0000 (0:00:01.143) 0:08:02.283 *********
2026-04-07 05:00:59.589696 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:00:59.589706 | orchestrator |
2026-04-07 05:00:59.589716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-07 05:00:59.589726 | orchestrator | Tuesday 07 April 2026 05:00:51 +0000 (0:00:01.133) 0:08:03.416 *********
2026-04-07 05:00:59.589735 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:00:59.589745 | orchestrator |
2026-04-07 05:00:59.589755 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-07 05:00:59.589764 | orchestrator | Tuesday 07 April 2026 05:00:52 +0000 (0:00:01.224) 0:08:04.641 *********
2026-04-07 05:00:59.589774 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-07 05:00:59.589784 | orchestrator |
2026-04-07 05:00:59.589794 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-07 05:00:59.589809 | orchestrator | Tuesday 07 April 2026 05:00:54 +0000 (0:00:01.464) 0:08:06.105 *********
2026-04-07 05:00:59.589819 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-07 05:00:59.589829 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-07 05:00:59.589839 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-07 05:00:59.589849 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-07 05:00:59.589859 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-07 05:00:59.589868 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-07 05:00:59.589878 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-07 05:00:59.589888 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-07 05:00:59.589897 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-07 05:00:59.589907 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-07 05:00:59.589917 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-07 05:00:59.589926 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-07 05:00:59.589936 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-07 05:00:59.589946 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-07 05:00:59.589961 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-07 05:01:47.519711 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-07 05:01:47.519827 | orchestrator |
2026-04-07 05:01:47.519845 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-07 05:01:47.519859 | orchestrator | Tuesday 07 April 2026 05:01:01 +0000 (0:00:06.955) 0:08:13.060 *********
2026-04-07 05:01:47.519871 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.519884 | orchestrator |
2026-04-07 05:01:47.519895 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-07 05:01:47.519906 | orchestrator | Tuesday 07 April 2026 05:01:02 +0000 (0:00:01.186) 0:08:14.247 *********
2026-04-07 05:01:47.519917 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.519928 | orchestrator |
2026-04-07 05:01:47.519939 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-07 05:01:47.519949 | orchestrator | Tuesday 07 April 2026 05:01:03 +0000 (0:00:01.124) 0:08:15.372 *********
2026-04-07 05:01:47.519961 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.519971 | orchestrator |
2026-04-07 05:01:47.519982 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-07 05:01:47.519993 | orchestrator | Tuesday 07 April 2026 05:01:04 +0000 (0:00:01.149) 0:08:16.521 *********
2026-04-07 05:01:47.520004 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520014 | orchestrator |
2026-04-07 05:01:47.520025 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-07 05:01:47.520037 | orchestrator | Tuesday 07 April 2026 05:01:05 +0000 (0:00:01.172) 0:08:17.694 *********
2026-04-07 05:01:47.520072 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520084 | orchestrator |
2026-04-07 05:01:47.520095 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-07 05:01:47.520106 | orchestrator | Tuesday 07 April 2026 05:01:06 +0000 (0:00:01.107) 0:08:18.801 *********
2026-04-07 05:01:47.520116 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520127 | orchestrator |
2026-04-07 05:01:47.520138 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-07 05:01:47.520150 | orchestrator | Tuesday 07 April 2026 05:01:07 +0000 (0:00:01.119) 0:08:19.921 *********
2026-04-07 05:01:47.520160 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520171 | orchestrator |
2026-04-07 05:01:47.520182 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-07 05:01:47.520193 | orchestrator | Tuesday 07 April 2026 05:01:09 +0000 (0:00:01.121) 0:08:21.043 *********
2026-04-07 05:01:47.520204 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520215 | orchestrator |
2026-04-07 05:01:47.520225 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-07 05:01:47.520236 | orchestrator | Tuesday 07 April 2026 05:01:10 +0000 (0:00:01.126) 0:08:22.170 *********
2026-04-07 05:01:47.520247 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520258 | orchestrator |
2026-04-07 05:01:47.520269 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-07 05:01:47.520279 | orchestrator | Tuesday 07 April 2026 05:01:11 +0000 (0:00:01.108) 0:08:23.279 *********
2026-04-07 05:01:47.520290 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520301 | orchestrator |
2026-04-07 05:01:47.520312 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-07 05:01:47.520323 | orchestrator | Tuesday 07 April 2026 05:01:12 +0000 (0:00:01.117) 0:08:24.396 *********
2026-04-07 05:01:47.520333 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520344 | orchestrator |
2026-04-07 05:01:47.520387 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-07 05:01:47.520408 | orchestrator | Tuesday 07 April 2026 05:01:13 +0000 (0:00:01.104) 0:08:25.501 *********
2026-04-07 05:01:47.520426 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520449 | orchestrator |
2026-04-07 05:01:47.520476 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-07 05:01:47.520493 | orchestrator | Tuesday 07 April 2026 05:01:14 +0000 (0:00:01.100) 0:08:26.602 *********
2026-04-07 05:01:47.520510 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520528 | orchestrator |
2026-04-07 05:01:47.520545 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-07 05:01:47.520563 | orchestrator | Tuesday 07 April 2026 05:01:15 +0000 (0:00:01.229) 0:08:27.831 *********
2026-04-07 05:01:47.520579 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520596 | orchestrator |
2026-04-07 05:01:47.520614 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-07 05:01:47.520632 | orchestrator | Tuesday 07 April 2026 05:01:16 +0000 (0:00:01.136) 0:08:28.968 *********
2026-04-07 05:01:47.520668 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520689 | orchestrator |
2026-04-07 05:01:47.520703 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-07 05:01:47.520714 | orchestrator | Tuesday 07 April 2026 05:01:18 +0000 (0:00:01.212) 0:08:30.181 *********
2026-04-07 05:01:47.520725 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520736 | orchestrator |
2026-04-07 05:01:47.520747 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-07 05:01:47.520758 | orchestrator | Tuesday 07 April 2026 05:01:19 +0000 (0:00:01.111) 0:08:31.292 *********
2026-04-07 05:01:47.520769 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520780 | orchestrator |
2026-04-07 05:01:47.520791 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-07 05:01:47.520816 | orchestrator | Tuesday 07 April 2026 05:01:20 +0000 (0:00:01.100) 0:08:32.393 *********
2026-04-07 05:01:47.520827 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520838 | orchestrator |
2026-04-07 05:01:47.520849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-07 05:01:47.520860 | orchestrator | Tuesday 07 April 2026 05:01:21 +0000 (0:00:01.112) 0:08:33.505 *********
2026-04-07 05:01:47.520871 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520881 | orchestrator |
2026-04-07 05:01:47.520913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-07 05:01:47.520925 | orchestrator | Tuesday 07 April 2026 05:01:22 +0000 (0:00:01.173) 0:08:34.679 *********
2026-04-07 05:01:47.520936 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520947 | orchestrator |
2026-04-07 05:01:47.520958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-07 05:01:47.520968 | orchestrator | Tuesday 07 April 2026 05:01:23 +0000 (0:00:01.146) 0:08:35.826 *********
2026-04-07 05:01:47.520979 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.520990 | orchestrator |
2026-04-07 05:01:47.521000 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-07 05:01:47.521011 | orchestrator | Tuesday 07 April 2026 05:01:25 +0000 (0:00:01.185) 0:08:37.012 *********
2026-04-07 05:01:47.521022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 05:01:47.521033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 05:01:47.521043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 05:01:47.521054 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521065 | orchestrator |
2026-04-07 05:01:47.521076 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-07 05:01:47.521086 | orchestrator | Tuesday 07 April 2026 05:01:27 +0000 (0:00:02.046) 0:08:39.059 *********
2026-04-07 05:01:47.521097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 05:01:47.521108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 05:01:47.521119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 05:01:47.521129 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521140 | orchestrator |
2026-04-07 05:01:47.521151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-07 05:01:47.521161 | orchestrator | Tuesday 07 April 2026 05:01:28 +0000 (0:00:01.445) 0:08:40.504 *********
2026-04-07 05:01:47.521172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-07 05:01:47.521183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-07 05:01:47.521194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-07 05:01:47.521205 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521215 | orchestrator |
2026-04-07 05:01:47.521226 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-07 05:01:47.521237 | orchestrator | Tuesday 07 April 2026 05:01:29 +0000 (0:00:01.398) 0:08:41.902 *********
2026-04-07 05:01:47.521248 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521258 | orchestrator |
2026-04-07 05:01:47.521269 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-07 05:01:47.521280 | orchestrator | Tuesday 07 April 2026 05:01:31 +0000 (0:00:01.130) 0:08:43.033 *********
2026-04-07 05:01:47.521291 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-07 05:01:47.521302 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521313 | orchestrator |
2026-04-07 05:01:47.521324 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-07 05:01:47.521334 | orchestrator | Tuesday 07 April 2026 05:01:32 +0000 (0:00:01.379) 0:08:44.412 *********
2026-04-07 05:01:47.521345 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:01:47.521377 | orchestrator |
2026-04-07 05:01:47.521388 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-07 05:01:47.521407 | orchestrator | Tuesday 07 April 2026 05:01:34 +0000 (0:00:01.714) 0:08:46.126 *********
2026-04-07 05:01:47.521418 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:01:47.521429 | orchestrator |
2026-04-07 05:01:47.521440 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-07 05:01:47.521451 | orchestrator | Tuesday 07 April 2026 05:01:35 +0000 (0:00:01.127) 0:08:47.253 *********
2026-04-07 05:01:47.521462 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-04-07 05:01:47.521474 | orchestrator |
2026-04-07 05:01:47.521485 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-07 05:01:47.521496 | orchestrator | Tuesday 07 April 2026 05:01:36 +0000 (0:00:03.060) 0:08:48.746 *********
2026-04-07 05:01:47.521506 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-07 05:01:47.521517 | orchestrator |
2026-04-07 05:01:47.521528 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-07 05:01:47.521539 | orchestrator | Tuesday 07 April 2026 05:01:39 +0000 (0:00:03.060) 0:08:51.806 *********
2026-04-07 05:01:47.521550 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:01:47.521561 | orchestrator |
2026-04-07 05:01:47.521572 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-07 05:01:47.521589 | orchestrator | Tuesday 07 April 2026 05:01:41 +0000 (0:00:01.181) 0:08:52.987 *********
2026-04-07 05:01:47.521600 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:01:47.521611 | orchestrator |
2026-04-07 05:01:47.521622 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-07 05:01:47.521633 | orchestrator | Tuesday 07 April 2026 05:01:42 +0000 (0:00:01.154) 0:08:54.142 *********
2026-04-07 05:01:47.521644 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:01:47.521655 | orchestrator |
2026-04-07 05:01:47.521666 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-07 05:01:47.521677 | orchestrator | Tuesday 07 April 2026 05:01:43 +0000 (0:00:01.155) 0:08:55.298 *********
2026-04-07 05:01:47.521688 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:01:47.521699 | orchestrator |
2026-04-07 05:01:47.521709 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-07 05:01:47.521720 | orchestrator | Tuesday 07 April 2026 05:01:45 +0000 (0:00:02.060) 0:08:57.359 *********
2026-04-07 05:01:47.521731 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:01:47.521742 | orchestrator |
2026-04-07 05:01:47.521753 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-07 05:01:47.521764 | orchestrator | Tuesday 07 April 2026 05:01:47 +0000 (0:00:01.627) 0:08:58.986 *********
2026-04-07 05:01:47.521775 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:01:47.521786 | orchestrator |
2026-04-07 05:01:47.521803 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-07 05:02:46.636001 | orchestrator | Tuesday 07 April 2026 05:01:48 +0000 (0:00:01.505) 0:09:00.492 *********
2026-04-07 05:02:46.636125 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636141 | orchestrator |
2026-04-07 05:02:46.636153 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-07 05:02:46.636164 | orchestrator | Tuesday 07 April 2026 05:01:50 +0000 (0:00:01.513) 0:09:02.005 *********
2026-04-07 05:02:46.636174 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636184 | orchestrator |
2026-04-07 05:02:46.636194 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-07 05:02:46.636204 | orchestrator | Tuesday 07 April 2026 05:01:51 +0000 (0:00:01.720) 0:09:03.726 *********
2026-04-07 05:02:46.636213 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636223 | orchestrator |
2026-04-07 05:02:46.636233 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-07 05:02:46.636243 | orchestrator | Tuesday 07 April 2026 05:01:53 +0000 (0:00:01.730) 0:09:05.456 *********
2026-04-07 05:02:46.636253 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-07 05:02:46.636264 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-07 05:02:46.636296 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-07 05:02:46.636307 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-04-07 05:02:46.636316 | orchestrator |
2026-04-07 05:02:46.636326 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-07 05:02:46.636336 | orchestrator | Tuesday 07 April 2026 05:01:57 +0000 (0:00:03.859) 0:09:09.316 *********
2026-04-07 05:02:46.636345 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:02:46.636355 | orchestrator |
2026-04-07 05:02:46.636365 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-07 05:02:46.636375 | orchestrator | Tuesday 07 April 2026 05:01:59 +0000 (0:00:02.066) 0:09:11.383 *********
2026-04-07 05:02:46.636384 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636394 | orchestrator |
2026-04-07 05:02:46.636432 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-07 05:02:46.636442 | orchestrator | Tuesday 07 April 2026 05:02:00 +0000 (0:00:01.088) 0:09:12.471 *********
2026-04-07 05:02:46.636451 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636461 | orchestrator |
2026-04-07 05:02:46.636471 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-07 05:02:46.636484 | orchestrator | Tuesday 07 April 2026 05:02:01 +0000 (0:00:01.091) 0:09:13.563 *********
2026-04-07 05:02:46.636495 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636507 | orchestrator |
2026-04-07 05:02:46.636518 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-07 05:02:46.636529 | orchestrator | Tuesday 07 April 2026 05:02:03 +0000 (0:00:02.170) 0:09:15.733 *********
2026-04-07 05:02:46.636541 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636552 | orchestrator |
2026-04-07 05:02:46.636563 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-07 05:02:46.636575 | orchestrator | Tuesday 07 April 2026 05:02:05 +0000 (0:00:01.568) 0:09:17.302 *********
2026-04-07 05:02:46.636586 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:02:46.636599 | orchestrator |
2026-04-07 05:02:46.636611 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-07 05:02:46.636620 | orchestrator | Tuesday 07 April 2026 05:02:06 +0000 (0:00:01.123) 0:09:18.426 *********
2026-04-07 05:02:46.636630 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-04-07 05:02:46.636640 | orchestrator |
2026-04-07 05:02:46.636650 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-07 05:02:46.636659 | orchestrator | Tuesday 07 April 2026 05:02:07 +0000 (0:00:01.517) 0:09:19.943 *********
2026-04-07 05:02:46.636669 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:02:46.636679 | orchestrator |
2026-04-07 05:02:46.636688 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-07 05:02:46.636698 | orchestrator | Tuesday 07 April 2026 05:02:09 +0000 (0:00:01.144) 0:09:21.088 *********
2026-04-07 05:02:46.636708 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:02:46.636717 | orchestrator |
2026-04-07 05:02:46.636727 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-07 05:02:46.636737 | orchestrator | Tuesday 07 April 2026 05:02:10 +0000 (0:00:01.097) 0:09:22.186 *********
2026-04-07 05:02:46.636746 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-04-07 05:02:46.636756 | orchestrator |
2026-04-07 05:02:46.636765 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-07 05:02:46.636789 | orchestrator | Tuesday 07 April 2026 05:02:11 +0000 (0:00:01.444) 0:09:23.630 *********
2026-04-07 05:02:46.636799 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:02:46.636809 | orchestrator |
2026-04-07 05:02:46.636818 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-07 05:02:46.636828 | orchestrator | Tuesday 07 April 2026 05:02:14 +0000 (0:00:02.446) 0:09:26.077 *********
2026-04-07 05:02:46.636837 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636855 | orchestrator |
2026-04-07 05:02:46.636865 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-07 05:02:46.636875 | orchestrator | Tuesday 07 April 2026 05:02:16 +0000 (0:00:02.009) 0:09:28.087 *********
2026-04-07 05:02:46.636884 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.636894 | orchestrator |
2026-04-07 05:02:46.636904 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-07 05:02:46.636913 | orchestrator | Tuesday 07 April 2026 05:02:18 +0000 (0:00:02.412) 0:09:30.499 *********
2026-04-07 05:02:46.636923 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:02:46.636933 | orchestrator |
2026-04-07 05:02:46.636942 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-07 05:02:46.636952 | orchestrator | Tuesday 07 April 2026 05:02:21 +0000 (0:00:03.390) 0:09:33.890 *********
2026-04-07 05:02:46.636962 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-04-07 05:02:46.636972 | orchestrator |
2026-04-07 05:02:46.636999 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-07 05:02:46.637009 | orchestrator | Tuesday 07 April 2026 05:02:23 +0000 (0:00:01.588) 0:09:35.479 *********
2026-04-07 05:02:46.637019 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.637028 | orchestrator |
2026-04-07 05:02:46.637038 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-07 05:02:46.637048 | orchestrator | Tuesday 07 April 2026 05:02:25 +0000 (0:00:02.362) 0:09:37.841 *********
2026-04-07 05:02:46.637058 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:02:46.637068 | orchestrator |
2026-04-07 05:02:46.637078 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-07 05:02:46.637087 | orchestrator | Tuesday 07 April 2026 05:02:28 +0000 (0:00:03.127) 0:09:40.969 *********
2026-04-07 05:02:46.637097 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:02:46.637107 | orchestrator |
2026-04-07 05:02:46.637117 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-07 05:02:46.637126 | orchestrator | Tuesday 07 April 2026 05:02:30 +0000 (0:00:01.128) 0:09:42.097 *********
2026-04-07 05:02:46.637139 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-07 05:02:46.637151 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-07 05:02:46.637161 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-07 05:02:46.637171 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-07 05:02:46.637183 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-07 05:02:46.637205 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d5ad443502af4f9e4c20c79c27888f92613db0a1'}])
2026-04-07 05:02:46.637217 | orchestrator |
2026-04-07 05:02:46.637227 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-07 05:02:46.637242 | orchestrator | Tuesday 07 April 2026 05:02:40 +0000 (0:00:10.369) 0:09:52.466 *********
2026-04-07 05:02:46.637252 | orchestrator | changed: [testbed-node-0]
2026-04-07 05:02:46.637262 | orchestrator |
2026-04-07 05:02:46.637271 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-07 05:02:46.637281 | orchestrator | Tuesday 07 April 2026 05:02:42 +0000 (0:00:02.495) 0:09:54.962 *********
2026-04-07 05:02:46.637291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:02:46.637301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 05:02:46.637311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 05:02:46.637320 | orchestrator |
2026-04-07 05:02:46.637330 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-07 05:02:46.637340 | orchestrator | Tuesday 07 April 2026 05:02:45 +0000 (0:00:02.289) 0:09:57.251 *********
2026-04-07 05:02:46.637349 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:02:46.637359 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 05:02:46.637369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 05:02:46.637379 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:02:46.637388 | orchestrator |
2026-04-07 05:02:46.637412 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-07 05:02:46.637429 | orchestrator | Tuesday 07 April 2026 05:02:46 +0000 (0:00:01.355) 0:09:58.606 *********
2026-04-07 05:34:08.100469 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:34:08.100756 | orchestrator |
2026-04-07 05:34:08.100792 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-07 05:34:08.100813 | orchestrator | Tuesday 07 April 2026 05:02:47 +0000 (0:00:01.102) 0:09:59.709 *********
2026-04-07 05:34:08.100855 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
2026-04-07 05:34:08.101053 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-04-07 05:34:08.101314 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-04-07 05:34:08.101577 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-04-07 05:34:08.101932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-04-07 05:34:08.102321 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-04-07 05:34:08.102592 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.244329", "end": "2026-04-07 05:34:06.608195", "msg": "non-zero return code", "rc": 1, "start": "2026-04-07 05:29:06.363866", "stderr": "2026-04-07T05:34:06.590+0000 7d1756010640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-04-07T05:34:06.590+0000 7d1756010640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []}
2026-04-07 05:34:08.102609 | orchestrator |
2026-04-07 05:34:08.102620 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-04-07 05:34:08.102645 | orchestrator | Tuesday 07 April 2026 05:34:08 +0000 (0:31:20.363) 0:41:20.072 *********
2026-04-07 05:34:15.196501 | orchestrator | 2026-04-07 05:34:15 | INFO  | Prepare task for execution of ceph-rolling_update.
2026-04-07 05:34:15.201247 | orchestrator | 2026-04-07 05:34:15 | INFO  | Task f9d5a8bd-d2ea-466e-9155-8dc735f13fd4 (ceph-rolling_update) was prepared for execution.
2026-04-07 05:34:15.201435 | orchestrator | 2026-04-07 05:34:15 | INFO  | It takes a moment until task f9d5a8bd-d2ea-466e-9155-8dc735f13fd4 (ceph-rolling_update) has been started and output is visible here.
2026-04-07 05:35:03.801816 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:03.801972 | orchestrator |
2026-04-07 05:35:03.801991 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-04-07 05:35:03.802005 | orchestrator | Tuesday 07 April 2026 05:34:09 +0000 (0:00:01.904) 0:41:21.976 *********
2026-04-07 05:35:03.802089 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:03.802103 | orchestrator |
2026-04-07 05:35:03.802115 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-04-07 05:35:03.802127 | orchestrator | Tuesday 07 April 2026 05:34:11 +0000 (0:00:01.853) 0:41:23.830 *********
2026-04-07 05:35:03.802140 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-04-07 05:35:03.802152 | orchestrator |
2026-04-07 05:35:03.802163 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 05:35:03.802175 | orchestrator | localhost       : ok=0    changed=0   unreachable=0 failed=0 skipped=1   rescued=0 ignored=0
2026-04-07 05:35:03.802186 | orchestrator | testbed-manager : ok=25   changed=1   unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 05:35:03.802198 | orchestrator | testbed-node-0  : ok=121  changed=10  unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
2026-04-07 05:35:03.802238 | orchestrator | testbed-node-1  : ok=25   changed=2   unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 05:35:03.802250 | orchestrator | testbed-node-2  : ok=25   changed=2   unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 05:35:03.802264 | orchestrator | testbed-node-3  : ok=33   changed=2   unreachable=0 failed=0 skipped=74  rescued=0 ignored=0
2026-04-07 05:35:03.802277 | orchestrator | testbed-node-4  : ok=33   changed=2   unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-04-07 05:35:03.802290 | orchestrator | testbed-node-5  : ok=33   changed=2   unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-04-07 05:35:03.802303 | orchestrator |
2026-04-07 05:35:03.802343 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 05:35:03.802357 | orchestrator | Tuesday 07 April 2026 05:34:14 +0000 (0:00:02.723) 0:41:26.554 *********
2026-04-07 05:35:03.802370 | orchestrator | ===============================================================================
2026-04-07 05:35:03.802383 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1880.36s
2026-04-07 05:35:03.802395 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.06s
2026-04-07 05:35:03.802408 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.77s
2026-04-07 05:35:03.802421 | orchestrator | Set cluster configs ---------------------------------------------------- 10.60s
2026-04-07 05:35:03.802434 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.37s
2026-04-07 05:35:03.802466 | orchestrator | ceph-infra : Update cache for Debian based OSs -------------------------- 8.53s
2026-04-07 05:35:03.802478 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.95s
2026-04-07 05:35:03.802488 | orchestrator | Gather facts ------------------------------------------------------------ 6.56s
2026-04-07 05:35:03.802499 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 4.62s
2026-04-07 05:35:03.802510 | orchestrator | Stop ceph mon ----------------------------------------------------------- 3.98s
2026-04-07 05:35:03.802521 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.86s
2026-04-07 05:35:03.802531 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.42s
2026-04-07 05:35:03.802542 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.40s
2026-04-07 05:35:03.802553 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.39s
2026-04-07 05:35:03.802564 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.37s
2026-04-07 05:35:03.802574 | orchestrator | Exit playbook, if user did not mean to upgrade cluster ------------------ 3.27s
2026-04-07 05:35:03.802585 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 3.24s
2026-04-07 05:35:03.802596 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 3.13s
2026-04-07 05:35:03.802642 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 3.07s
2026-04-07 05:35:03.802653 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.06s
2026-04-07 05:35:03.802664 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-04-07 05:35:03.802688 | orchestrator |
2026-04-07 05:35:03.802698 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-04-07 05:35:03.802710 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin (): Expecting value: line 2 column 1 (char 1)
2026-04-07 05:35:03.802775 | orchestrator |
2026-04-07 05:35:03.802786 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-04-07 05:35:03.802797 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin (): 'NoneType' object is not subscriptable
2026-04-07 05:35:03.802829 | orchestrator | Tuesday 07 April 2026 05:34:22 +0000 (0:00:01.195) 0:00:01.195 *********
2026-04-07 05:35:03.802840 | orchestrator | skipping: [localhost]
2026-04-07 05:35:03.802851 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-04-07 05:35:03.802862 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-04-07 05:35:03.802873 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-04-07 05:35:03.802883 | orchestrator |
2026-04-07 05:35:03.802894 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-04-07 05:35:03.802905 | orchestrator |
2026-04-07 05:35:03.802916 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-04-07 05:35:03.802927 | orchestrator | Tuesday 07 April 2026 05:34:23 +0000 (0:00:00.767) 0:00:01.963 *********
2026-04-07 05:35:03.802938 | orchestrator | ok: [testbed-node-0] => {
2026-04-07 05:35:03.802949 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.802960 | orchestrator | }
2026-04-07 05:35:03.802971 | orchestrator | ok: [testbed-node-1] => {
2026-04-07 05:35:03.802982 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.802993 | orchestrator | }
2026-04-07 05:35:03.803004 | orchestrator | ok: [testbed-node-2] => {
2026-04-07 05:35:03.803014 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.803025 | orchestrator | }
2026-04-07 05:35:03.803036 | orchestrator | ok: [testbed-node-3] => {
2026-04-07 05:35:03.803047 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.803058 | orchestrator | }
2026-04-07 05:35:03.803068 | orchestrator | ok: [testbed-node-4] => {
2026-04-07 05:35:03.803079 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.803090 | orchestrator | }
2026-04-07 05:35:03.803102 | orchestrator | ok: [testbed-node-5] => {
2026-04-07 05:35:03.803113 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.803123 | orchestrator | }
2026-04-07 05:35:03.803134 | orchestrator | ok: [testbed-manager] => {
2026-04-07 05:35:03.803145 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-04-07 05:35:03.803156 | orchestrator | }
2026-04-07 05:35:03.803167 | orchestrator |
2026-04-07 05:35:03.803178 | orchestrator | TASK [Gather facts] ************************************************************
2026-04-07 05:35:03.803188 | orchestrator | Tuesday 07 April 2026 05:34:25 +0000 (0:00:02.354) 0:00:04.318 *********
2026-04-07 05:35:03.803199 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:35:03.803210 | orchestrator | skipping: [testbed-node-1]
2026-04-07 05:35:03.803221 | orchestrator | skipping: [testbed-node-2]
2026-04-07 05:35:03.803232 | orchestrator | skipping: [testbed-node-3]
2026-04-07 05:35:03.803242 | orchestrator | skipping: [testbed-node-4]
2026-04-07 05:35:03.803253 | orchestrator | skipping: [testbed-node-5]
2026-04-07 05:35:03.803264 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:03.803275 | orchestrator |
2026-04-07 05:35:03.803286 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-04-07 05:35:03.803297 | orchestrator | Tuesday 07 April 2026 05:34:29 +0000 (0:00:04.184) 0:00:08.502 *********
2026-04-07 05:35:03.803308 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-07 05:35:03.803324 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-07 05:35:03.803342 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 05:35:03.803354 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-07 05:35:03.803364 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-07 05:35:03.803375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:35:03.803386 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 05:35:03.803397 | orchestrator |
2026-04-07 05:35:03.803408 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-04-07 05:35:03.803419 | orchestrator | Tuesday 07 April 2026 05:35:00 +0000 (0:00:30.587) 0:00:39.089 *********
2026-04-07 05:35:03.803430 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:03.803441 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:03.803452 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:03.803463 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:03.803474 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:03.803485 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:03.803496 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:03.803506 | orchestrator |
2026-04-07 05:35:03.803517 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-07 05:35:03.803528 | orchestrator | Tuesday 07 April 2026 05:35:01 +0000 (0:00:01.011) 0:00:40.101 *********
2026-04-07 05:35:03.803539 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-07 05:35:03.803551 | orchestrator |
2026-04-07 05:35:03.803562 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-07 05:35:03.803573 | orchestrator | Tuesday 07 April 2026 05:35:03 +0000 (0:00:01.856) 0:00:41.957 *********
2026-04-07 05:35:03.803583 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:03.803594 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:03.803623 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:03.803641 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.801335 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.801481 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.801498 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.801510 | orchestrator |
2026-04-07 05:35:15.801524 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-07 05:35:15.801536 | orchestrator | Tuesday 07 April 2026 05:35:04 +0000 (0:00:01.287) 0:00:43.245 *********
2026-04-07 05:35:15.801548 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.801558 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.801569 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.801644 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.801656 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.801666 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.801677 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.801688 | orchestrator |
2026-04-07 05:35:15.801699 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 05:35:15.801710 | orchestrator | Tuesday 07 April 2026 05:35:05 +0000 (0:00:00.792) 0:00:44.037 *********
2026-04-07 05:35:15.801721 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.801732 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.801743 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.801753 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.801764 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.801775 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.801786 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.801797 | orchestrator |
2026-04-07 05:35:15.801809 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 05:35:15.801823 | orchestrator | Tuesday 07 April 2026 05:35:06 +0000 (0:00:01.286) 0:00:45.323 *********
2026-04-07 05:35:15.801836 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.801893 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.801907 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.801920 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.801932 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.801944 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.801957 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.801969 | orchestrator |
2026-04-07 05:35:15.801982 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-07 05:35:15.801996 | orchestrator | Tuesday 07 April 2026 05:35:07 +0000 (0:00:00.694) 0:00:46.018 *********
2026-04-07 05:35:15.802009 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.802082 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.802096 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.802109 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.802121 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.802133 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.802145 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.802158 | orchestrator |
2026-04-07 05:35:15.802171 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-07 05:35:15.802184 | orchestrator | Tuesday 07 April 2026 05:35:07 +0000 (0:00:00.782) 0:00:46.800 *********
2026-04-07 05:35:15.802195 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.802206 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.802217 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.802228 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.802239 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.802249 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.802260 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.802271 | orchestrator |
2026-04-07 05:35:15.802282 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-07 05:35:15.802294 | orchestrator | Tuesday 07 April 2026 05:35:08 +0000 (0:00:00.673) 0:00:47.473 *********
2026-04-07 05:35:15.802305 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:35:15.802316 | orchestrator | skipping: [testbed-node-1]
2026-04-07 05:35:15.802327 | orchestrator | skipping: [testbed-node-2]
2026-04-07 05:35:15.802338 | orchestrator | skipping: [testbed-node-3]
2026-04-07 05:35:15.802348 | orchestrator | skipping: [testbed-node-4]
2026-04-07 05:35:15.802359 | orchestrator | skipping: [testbed-node-5]
2026-04-07 05:35:15.802370 | orchestrator | skipping: [testbed-manager]
2026-04-07 05:35:15.802380 | orchestrator |
2026-04-07 05:35:15.802391 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-07 05:35:15.802416 | orchestrator | Tuesday 07 April 2026 05:35:09 +0000 (0:00:00.832) 0:00:48.305 *********
2026-04-07 05:35:15.802427 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.802438 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.802448 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.802460 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.802471 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.802481 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.802492 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.802503 | orchestrator |
2026-04-07 05:35:15.802514 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-07 05:35:15.802525 | orchestrator | Tuesday 07 April 2026 05:35:10 +0000 (0:00:00.646) 0:00:48.952 *********
2026-04-07 05:35:15.802536 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:35:15.802546 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 05:35:15.802558 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 05:35:15.802568 | orchestrator |
2026-04-07 05:35:15.802598 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-07 05:35:15.802610 | orchestrator | Tuesday 07 April 2026 05:35:11 +0000 (0:00:00.922) 0:00:49.874 *********
2026-04-07 05:35:15.802621 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:35:15.802631 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:35:15.802642 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:35:15.802676 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:35:15.802688 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:35:15.802699 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:35:15.802709 | orchestrator | ok: [testbed-manager]
2026-04-07 05:35:15.802720 | orchestrator |
2026-04-07 05:35:15.802731 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-07 05:35:15.802742 | orchestrator | Tuesday 07 April 2026 05:35:11 +0000 (0:00:00.785) 0:00:50.660 *********
2026-04-07 05:35:15.802753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:35:15.802764 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 05:35:15.802775 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 05:35:15.802786 | orchestrator |
2026-04-07 05:35:15.802816 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-07 05:35:15.802828 | orchestrator | Tuesday 07 April 2026 05:35:14 +0000 (0:00:02.364)
0:00:53.024 ********* 2026-04-07 05:35:15.802839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 05:35:15.802851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 05:35:15.802861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 05:35:15.802872 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:15.802883 | orchestrator | 2026-04-07 05:35:15.802894 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-07 05:35:15.802905 | orchestrator | Tuesday 07 April 2026 05:35:14 +0000 (0:00:00.439) 0:00:53.464 ********* 2026-04-07 05:35:15.802917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.802932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.802944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.802955 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:15.802966 | orchestrator | 2026-04-07 05:35:15.802977 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-07 05:35:15.802988 | orchestrator | Tuesday 07 April 2026 05:35:15 +0000 (0:00:00.907) 0:00:54.371 ********* 2026-04-07 05:35:15.803014 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.803028 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.803045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:15.803065 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:15.803076 | orchestrator | 2026-04-07 05:35:15.803087 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-07 05:35:15.803099 | orchestrator | Tuesday 07 April 2026 05:35:15 +0000 (0:00:00.152) 0:00:54.524 ********* 2026-04-07 05:35:15.803111 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5d7151ccbc56', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 05:35:12.485417', 'end': '2026-04-07 05:35:12.555599', 'delta': '0:00:00.070182', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5d7151ccbc56'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-07 05:35:15.803134 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8d9f46c7c23', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 05:35:13.078821', 'end': '2026-04-07 05:35:13.129153', 'delta': '0:00:00.050332', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8d9f46c7c23'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-07 05:35:29.919818 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f4f6ca89ad43', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 05:35:13.971379', 'end': '2026-04-07 05:35:14.025265', 'delta': '0:00:00.053886', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4f6ca89ad43'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-07 05:35:29.919901 
| orchestrator | 2026-04-07 05:35:29.919909 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-07 05:35:29.919915 | orchestrator | Tuesday 07 April 2026 05:35:15 +0000 (0:00:00.216) 0:00:54.740 ********* 2026-04-07 05:35:29.919919 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:29.919924 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:29.919927 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:29.919931 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:29.919935 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:29.919939 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:29.919943 | orchestrator | ok: [testbed-manager] 2026-04-07 05:35:29.919946 | orchestrator | 2026-04-07 05:35:29.919951 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-07 05:35:29.919955 | orchestrator | Tuesday 07 April 2026 05:35:17 +0000 (0:00:01.309) 0:00:56.050 ********* 2026-04-07 05:35:29.919958 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.919963 | orchestrator | 2026-04-07 05:35:29.919967 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-07 05:35:29.919970 | orchestrator | Tuesday 07 April 2026 05:35:17 +0000 (0:00:00.244) 0:00:56.295 ********* 2026-04-07 05:35:29.919974 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:29.920019 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:29.920023 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:29.920027 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:29.920031 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:29.920034 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:29.920038 | orchestrator | ok: [testbed-manager] 2026-04-07 05:35:29.920042 | orchestrator | 2026-04-07 05:35:29.920045 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-07 
05:35:29.920049 | orchestrator | Tuesday 07 April 2026 05:35:18 +0000 (0:00:01.058) 0:00:57.353 ********* 2026-04-07 05:35:29.920053 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:29.920057 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920061 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920065 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920069 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920073 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-07 05:35:29.920099 | orchestrator | 2026-04-07 05:35:29.920103 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 05:35:29.920107 | orchestrator | Tuesday 07 April 2026 05:35:21 +0000 (0:00:02.609) 0:00:59.962 ********* 2026-04-07 05:35:29.920110 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:29.920114 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:29.920118 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:29.920122 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:29.920125 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:29.920129 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:29.920133 | orchestrator | ok: [testbed-manager] 2026-04-07 05:35:29.920136 | orchestrator | 2026-04-07 05:35:29.920140 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-07 05:35:29.920144 | orchestrator | Tuesday 07 April 2026 05:35:22 +0000 (0:00:01.007) 0:01:00.970 ********* 2026-04-07 05:35:29.920148 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920151 | orchestrator | 2026-04-07 05:35:29.920155 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-04-07 05:35:29.920159 | orchestrator | Tuesday 07 April 2026 05:35:22 +0000 (0:00:00.119) 0:01:01.089 ********* 2026-04-07 05:35:29.920163 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920166 | orchestrator | 2026-04-07 05:35:29.920170 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-07 05:35:29.920174 | orchestrator | Tuesday 07 April 2026 05:35:22 +0000 (0:00:00.246) 0:01:01.336 ********* 2026-04-07 05:35:29.920178 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920181 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920185 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920189 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920192 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920196 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920200 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920203 | orchestrator | 2026-04-07 05:35:29.920207 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-07 05:35:29.920211 | orchestrator | Tuesday 07 April 2026 05:35:23 +0000 (0:00:01.085) 0:01:02.422 ********* 2026-04-07 05:35:29.920215 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920218 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920222 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920226 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920229 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920233 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920237 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920240 | orchestrator | 2026-04-07 05:35:29.920244 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-07 05:35:29.920262 | 
orchestrator | Tuesday 07 April 2026 05:35:24 +0000 (0:00:01.047) 0:01:03.469 ********* 2026-04-07 05:35:29.920266 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920270 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920273 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920277 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920281 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920284 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920288 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920292 | orchestrator | 2026-04-07 05:35:29.920296 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-07 05:35:29.920299 | orchestrator | Tuesday 07 April 2026 05:35:25 +0000 (0:00:00.992) 0:01:04.462 ********* 2026-04-07 05:35:29.920303 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920307 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920310 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920314 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920318 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920322 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920325 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920329 | orchestrator | 2026-04-07 05:35:29.920333 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-07 05:35:29.920336 | orchestrator | Tuesday 07 April 2026 05:35:26 +0000 (0:00:00.921) 0:01:05.383 ********* 2026-04-07 05:35:29.920340 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920344 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920348 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920351 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920355 | orchestrator | skipping: 
[testbed-node-4] 2026-04-07 05:35:29.920359 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920362 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920366 | orchestrator | 2026-04-07 05:35:29.920370 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-07 05:35:29.920374 | orchestrator | Tuesday 07 April 2026 05:35:27 +0000 (0:00:01.250) 0:01:06.634 ********* 2026-04-07 05:35:29.920377 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920381 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920386 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920390 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920394 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920399 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920403 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920407 | orchestrator | 2026-04-07 05:35:29.920411 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-07 05:35:29.920417 | orchestrator | Tuesday 07 April 2026 05:35:28 +0000 (0:00:00.859) 0:01:07.493 ********* 2026-04-07 05:35:29.920421 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:29.920425 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:29.920430 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:29.920434 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:29.920438 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:29.920443 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:29.920447 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:29.920451 | orchestrator | 2026-04-07 05:35:29.920456 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-07 05:35:29.920461 | orchestrator | Tuesday 07 April 2026 05:35:29 +0000 (0:00:01.148) 
0:01:08.642 ********* 2026-04-07 05:35:29.920469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:29.920479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:29.920484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:29.920490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': 
[]}})  2026-04-07 05:35:29.920499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 
'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.140413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140432 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:30.140456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140473 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:30.140497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140520 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.140536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36ff44a1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.387230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387413 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:30.387451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387463 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387486 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:30.387618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb3b1ac7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part14'], 
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb3b1ac7-71e4-4418-bc53-c930c1882772-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.387648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.387684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a', 'dm-uuid-LVM-bglYLCxgkD3Qei681bqPmMF5XF5Cd1MSWl8BDXhbFTKiwBIAb3oEgAczEGV9LXaZ'], 'uuids': ['f2bf8803-d65d-44f0-ad5c-6b3f26298c9c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ']}})  2026-04-07 05:35:30.387707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc', 'scsi-SQEMU_QEMU_HARDDISK_d0766011-b4d1-4704-bfcf-26d11fc4e2cc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0766011', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.543921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KbQcdi-US6m-bhDi-eJCV-lYyz-1b3q-6dXcPl', 'scsi-0QEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc', 'scsi-SQEMU_QEMU_HARDDISK_7a8fe78b-90ad-4857-b477-d40f4ed756fc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a']}})  2026-04-07 05:35:30.544012 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:30.544027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:30.544060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i', 'dm-uuid-CRYPT-LUKS2-4ff33acd7a6c412b9d804fdff86f67b2-z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44abcd21--31e3--595d--ad07--7c010500a60a-osd--block--44abcd21--31e3--595d--ad07--7c010500a60a', 'dm-uuid-LVM-Iy8rcFTCo5W5yRGOTreEEQjp17ko3Q41z5GT9DF2n3y0jUXUATRgcvWUva5Hkl5i'], 'uuids': ['4ff33acd-7a6c-412b-9d80-4fdff86f67b2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7a8fe78b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['z5GT9D-F2n3-y0jU-XUAT-Rgcv-WUva-5Hkl5i']}})  2026-04-07 05:35:30.544141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kNGUrC-NTT1-tndE-pJPs-WGt9-udV7-3Eh5Id', 'scsi-0QEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539', 'scsi-SQEMU_QEMU_HARDDISK_99243621-e146-4726-8289-3c034b504539'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '99243621', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--116f5715--f5f6--56e4--87eb--3f2be33e5f2a-osd--block--116f5715--f5f6--56e4--87eb--3f2be33e5f2a']}})  2026-04-07 05:35:30.544151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.544179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca08a9c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_aca08a9c-83bc-497a-93bb-837b1de894dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.758479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09', 'dm-uuid-LVM-bMsdwvKXiGbLYxQ2sqen2wd8SFVCxkJLQE7kiiwsLEGhL2FNSj6gPgLd2pZMGUoL'], 'uuids': ['7025c1bb-400d-47b2-a45c-5776ba2915d5'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL']}})  2026-04-07 05:35:30.758650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc', 'scsi-SQEMU_QEMU_HARDDISK_4ea74e91-c20c-41f1-919c-d0143e478dbc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4ea74e91', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.758686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ', 'dm-uuid-CRYPT-LUKS2-f2bf8803d65d44f0ad5c6b3f26298c9c-Wl8BDX-hbFT-KiwB-IAb3-oEgA-czEG-V9LXaZ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-uFQjDD-6Vwu-b0Df-kkau-8GoO-290Z-GefUFg', 'scsi-0QEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c', 'scsi-SQEMU_QEMU_HARDDISK_cf020a49-c89e-40cb-ad7e-e7245d038c5c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f']}})  2026-04-07 05:35:30.758746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:30.758838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8', 'dm-uuid-CRYPT-LUKS2-ba89526f6e2c46628e82906f3c013265-9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.758873 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:30.758916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ccafa0da--42f8--5022--b95e--1902d46c646f-osd--block--ccafa0da--42f8--5022--b95e--1902d46c646f', 'dm-uuid-LVM-AwooBDvX7rFetLSgq1Ce0QV9OX4RcM369HiqdQcuvs1yzXwuZ5Vmxo8NwSkEMSV8'], 'uuids': ['ba89526f-6e2c-4662-8e82-906f3c013265'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cf020a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['9HiqdQ-cuvs-1yzX-wuZ5-Vmxo-8NwS-kEMSV8']}})  2026-04-07 05:35:30.758940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-C8sbvR-d1U1-401x-XxcV-6mPF-9ypK-VoR24u', 'scsi-0QEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f', 'scsi-SQEMU_QEMU_HARDDISK_62e8e967-b9fa-4acb-b372-c409143b479f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62e8e967', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8941099b--00de--50f1--81f7--f26159704c09-osd--block--8941099b--00de--50f1--81f7--f26159704c09']}})  2026-04-07 05:35:30.758964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdec1fc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16', 
'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfdec1fc-6534-4f16-a48b-f139f04a1945-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.873312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL', 'dm-uuid-CRYPT-LUKS2-7025c1bb400d47b2a45c5776ba2915d5-QE7kii-wsLE-GhL2-FNSj-6gPg-Ld2p-ZMGUoL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-07 05:35:30.873461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d', 'dm-uuid-LVM-70S3mOSclp5fTNOIhfFxohdLg5UX463GstIgbONbBmukx2iBeuHV5bIO1Eujm1WX'], 'uuids': ['699c8f07-94ac-4c9a-a0de-024156723f9a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX']}})  2026-04-07 05:35:30.873479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599', 'scsi-SQEMU_QEMU_HARDDISK_b27a0136-39f6-47a5-af08-be8e3f686599'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b27a0136', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:30.873493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UDOgFQ-qFyi-gVi2-LQBC-OZQf-u9TS-0kON4x', 'scsi-0QEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7', 'scsi-SQEMU_QEMU_HARDDISK_b14e5d9d-4ad1-4026-8a9c-5dfff539a0a7'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2']}})  2026-04-07 05:35:30.873505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-41-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:30.873660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:30.873690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8', 'dm-uuid-CRYPT-LUKS2-4e91966f3ea449a98c6c9031afa42b57-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:31.050767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.050878 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'uuids': ['4e91966f-3ea4-49a9-8c6c-9031afa42b57'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8']}})  2026-04-07 05:35:31.050909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d']}})  2026-04-07 05:35:31.050996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2524aa84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-07 05:35:31.051068 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:31.051082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX', 
'dm-uuid-CRYPT-LUKS2-699c8f0794ac4c9aa0de024156723f9a-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051144 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:31.051214 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051240 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.051251 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': 
{'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-24-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-07 05:35:31.051291 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.580706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.580821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.580848 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f80bc7fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-07 05:35:31.580969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.581011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-07 05:35:31.581031 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:31.581050 | orchestrator | 2026-04-07 05:35:31.581068 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-07 05:35:31.581086 | orchestrator | Tuesday 07 April 2026 05:35:31 +0000 (0:00:01.386) 0:01:10.028 ********* 2026-04-07 05:35:31.581128 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581148 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581178 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581196 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581215 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581240 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.581269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800595 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800724 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800798 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800816 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:31.800851 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800884 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800897 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800910 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800921 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800939 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.800963 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36ff44a1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15', 
'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1', 'scsi-SQEMU_QEMU_HARDDISK_36ff44a1-7c72-437b-8a26-984714c4230e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.980022 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:31.980134 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
2026-04-07 05:35:31.980151 | orchestrator | skipping: [testbed-node-1]
2026-04-07 05:35:31.980165 | orchestrator | skipping: [testbed-node-2] => (all items: loop0-loop7, sr0, sda — conditional 'inventory_hostname in groups.get(osd_group_name, [])' was false)
2026-04-07 05:35:32.302296 | orchestrator | skipping: [testbed-node-3] => (all items: loop0-loop7, sr0, sda, sdb, sdc, sdd, dm-0, dm-1, dm-2, dm-3 — conditional 'osd_auto_discovery | default(False) | bool' was false)
2026-04-07 05:35:32.302414 | orchestrator | skipping: [testbed-node-4] => (all items: loop0-loop7, sr0, sda, sdb, sdc, sdd, dm-0, dm-1, dm-2, dm-3 — conditional 'osd_auto_discovery | default(False) | bool' was false)
2026-04-07 05:35:32.742304 | orchestrator | skipping: [testbed-node-5] => (all items: loop0-loop7, sr0, sda, sdb, sdc, sdd, dm-0, dm-1, dm-2, dm-3 — conditional 'osd_auto_discovery | default(False) | bool' was false)
[repeated per-item ansible_devices fact dumps omitted for brevity; each skipped item carried the full device facts: loop0-loop7 (virtual, 0.00 Bytes), sr0 (QEMU DVD-ROM, 506.00 KB, config-2 ISO), sda (QEMU HARDDISK, 80.00 GB root disk with BOOT/UEFI/cloudimg-rootfs partitions), sdb/sdc/sdd (QEMU HARDDISK, 20.00 GB each, LVM PVs holding ceph OSD block volumes), dm-0/dm-1 (LVM ceph osd-block mappers, 20.00 GB), dm-2/dm-3 (LUKS2 crypt mappers, 19.98 GB)]
Virtio SCSI', 'holders': ['ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2']}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.742619 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.742656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.742680 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:32.742695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.742707 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:32.742718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.742740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8', 'dm-uuid-CRYPT-LUKS2-4e91966f3ea449a98c6c9031afa42b57-3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.767973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--754aebfc--d76c--537f--941d--8ad36483cdb2-osd--block--754aebfc--d76c--537f--941d--8ad36483cdb2', 'dm-uuid-LVM-T2EndjdOS29FjzC5jDtGOSk25DBRWo663ZaUiQtM2GjT3MT0SRy5sJS0UQzmoSs8'], 'uuids': ['4e91966f-3ea4-49a9-8c6c-9031afa42b57'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b14e5d9d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3ZaUiQ-tM2G-jT3M-T0SR-y5sJ-S0UQ-zmoSs8']}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768130 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ISxGDZ-smz1-74tU-v9PH-Tqzx-sLKc-qKqsod', 'scsi-0QEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d', 'scsi-SQEMU_QEMU_HARDDISK_45504c97-465e-453c-b9da-4a892d5e284d'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 
'sas_address': None, 'sas_device_handle': None, 'serial': '45504c97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ed7b856a--23c6--522d--bad3--e57b6a18196d-osd--block--ed7b856a--23c6--522d--bad3--e57b6a18196d']}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768184 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2524aa84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1', 'scsi-SQEMU_QEMU_HARDDISK_2524aa84-ef66-48f2-a92a-bce47df89de2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768224 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-04-07 05:35:32.768248 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:32.768269 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX', 'dm-uuid-CRYPT-LUKS2-699c8f0794ac4c9aa0de024156723f9a-stIgbO-NbBm-ukx2-iBeu-HV5b-IO1E-ujm1WX'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 
'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606878 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-24-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606896 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:36.606921 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606933 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606942 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.606980 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f80bc7fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 
'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1', 'scsi-SQEMU_QEMU_HARDDISK_f80bc7fe-963f-46da-995b-2ace11698774-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.607001 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.607011 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-07 05:35:36.607020 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:36.607030 | orchestrator | 2026-04-07 05:35:36.607040 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-07 05:35:36.607051 | orchestrator | Tuesday 07 April 2026 05:35:32 +0000 (0:00:01.751) 0:01:11.780 ********* 2026-04-07 05:35:36.607060 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:36.607069 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:36.607078 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:36.607086 | orchestrator | ok: [testbed-node-3] 2026-04-07 
05:35:36.607095 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:36.607104 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:36.607112 | orchestrator | ok: [testbed-manager] 2026-04-07 05:35:36.607121 | orchestrator | 2026-04-07 05:35:36.607131 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-07 05:35:36.607140 | orchestrator | Tuesday 07 April 2026 05:35:34 +0000 (0:00:01.665) 0:01:13.445 ********* 2026-04-07 05:35:36.607149 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:36.607157 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:36.607166 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:36.607175 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:36.607184 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:36.607192 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:36.607201 | orchestrator | ok: [testbed-manager] 2026-04-07 05:35:36.607216 | orchestrator | 2026-04-07 05:35:36.607231 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 05:35:36.607246 | orchestrator | Tuesday 07 April 2026 05:35:35 +0000 (0:00:00.767) 0:01:14.213 ********* 2026-04-07 05:35:36.607262 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:35:36.607277 | orchestrator | ok: [testbed-node-1] 2026-04-07 05:35:36.607288 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:35:36.607298 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:36.607308 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:36.607319 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:36.607336 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:49.002194 | orchestrator | 2026-04-07 05:35:49.002277 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 05:35:49.002287 | orchestrator | Tuesday 07 April 2026 05:35:36 +0000 (0:00:01.317) 0:01:15.530 ********* 2026-04-07 05:35:49.002294 | 
orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:49.002302 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:49.002319 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:49.002332 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.002339 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.002345 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.002352 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:49.002358 | orchestrator | 2026-04-07 05:35:49.002365 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 05:35:49.002372 | orchestrator | Tuesday 07 April 2026 05:35:37 +0000 (0:00:00.757) 0:01:16.288 ********* 2026-04-07 05:35:49.002379 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:49.002385 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:49.002392 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:49.002398 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.002405 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.002411 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.002417 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-04-07 05:35:49.002424 | orchestrator | 2026-04-07 05:35:49.002431 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 05:35:49.002437 | orchestrator | Tuesday 07 April 2026 05:35:39 +0000 (0:00:01.552) 0:01:17.841 ********* 2026-04-07 05:35:49.002443 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:49.002448 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:49.002454 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:49.002460 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.002466 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.002473 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 05:35:49.002479 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:49.002485 | orchestrator | 2026-04-07 05:35:49.002491 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 05:35:49.002498 | orchestrator | Tuesday 07 April 2026 05:35:39 +0000 (0:00:00.764) 0:01:18.606 ********* 2026-04-07 05:35:49.002504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:35:49.002564 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-07 05:35:49.002572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 05:35:49.002578 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-07 05:35:49.002584 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-07 05:35:49.002590 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 05:35:49.002597 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-07 05:35:49.002602 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-07 05:35:49.002609 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-07 05:35:49.002615 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-07 05:35:49.002622 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-07 05:35:49.002627 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-07 05:35:49.002652 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-07 05:35:49.002658 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-07 05:35:49.002664 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-07 05:35:49.002670 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-07 05:35:49.002712 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-07 05:35:49.002720 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-07 
05:35:49.002726 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-07 05:35:49.002732 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-07 05:35:49.002738 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-07 05:35:49.002744 | orchestrator | 2026-04-07 05:35:49.002751 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 05:35:49.002757 | orchestrator | Tuesday 07 April 2026 05:35:41 +0000 (0:00:01.999) 0:01:20.605 ********* 2026-04-07 05:35:49.002765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 05:35:49.002771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 05:35:49.002777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 05:35:49.002783 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:49.002790 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-07 05:35:49.002796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-07 05:35:49.002803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-07 05:35:49.002809 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:49.002815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-07 05:35:49.002822 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-07 05:35:49.002828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-07 05:35:49.002834 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:49.002840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-07 05:35:49.002846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-07 05:35:49.002852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-07 05:35:49.002858 | orchestrator | skipping: [testbed-node-3] 
2026-04-07 05:35:49.002865 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-07 05:35:49.002872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-07 05:35:49.002878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-07 05:35:49.002884 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.002890 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-07 05:35:49.002913 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-07 05:35:49.002919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-07 05:35:49.002925 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.002932 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-07 05:35:49.002938 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-07 05:35:49.002944 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-07 05:35:49.002951 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:49.002957 | orchestrator | 2026-04-07 05:35:49.002964 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 05:35:49.002971 | orchestrator | Tuesday 07 April 2026 05:35:42 +0000 (0:00:00.909) 0:01:21.515 ********* 2026-04-07 05:35:49.002978 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:35:49.002984 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:35:49.002990 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:35:49.002996 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:35:49.003003 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 05:35:49.003019 | orchestrator | 2026-04-07 05:35:49.003026 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 05:35:49.003033 | orchestrator | Tuesday 07 April 2026 05:35:44 +0000 (0:00:01.333) 0:01:22.848 ********* 2026-04-07 05:35:49.003039 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003045 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.003051 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.003057 | orchestrator | 2026-04-07 05:35:49.003064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 05:35:49.003070 | orchestrator | Tuesday 07 April 2026 05:35:44 +0000 (0:00:00.594) 0:01:23.443 ********* 2026-04-07 05:35:49.003076 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003083 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.003089 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.003096 | orchestrator | 2026-04-07 05:35:49.003102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-07 05:35:49.003113 | orchestrator | Tuesday 07 April 2026 05:35:44 +0000 (0:00:00.385) 0:01:23.829 ********* 2026-04-07 05:35:49.003119 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003125 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:35:49.003131 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:35:49.003137 | orchestrator | 2026-04-07 05:35:49.003142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 05:35:49.003148 | orchestrator | Tuesday 07 April 2026 05:35:45 +0000 (0:00:00.394) 0:01:24.223 ********* 2026-04-07 05:35:49.003154 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:49.003160 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:49.003166 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:49.003172 | orchestrator | 2026-04-07 05:35:49.003177 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-04-07 05:35:49.003183 | orchestrator | Tuesday 07 April 2026 05:35:45 +0000 (0:00:00.469) 0:01:24.693 ********* 2026-04-07 05:35:49.003189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 05:35:49.003195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 05:35:49.003200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 05:35:49.003206 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003212 | orchestrator | 2026-04-07 05:35:49.003218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 05:35:49.003224 | orchestrator | Tuesday 07 April 2026 05:35:46 +0000 (0:00:00.407) 0:01:25.100 ********* 2026-04-07 05:35:49.003229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 05:35:49.003235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 05:35:49.003241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 05:35:49.003247 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003252 | orchestrator | 2026-04-07 05:35:49.003258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 05:35:49.003264 | orchestrator | Tuesday 07 April 2026 05:35:46 +0000 (0:00:00.417) 0:01:25.518 ********* 2026-04-07 05:35:49.003269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-07 05:35:49.003275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-07 05:35:49.003281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-07 05:35:49.003287 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:35:49.003292 | orchestrator | 2026-04-07 05:35:49.003299 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 
05:35:49.003305 | orchestrator | Tuesday 07 April 2026 05:35:47 +0000 (0:00:00.712) 0:01:26.230 ********* 2026-04-07 05:35:49.003310 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:35:49.003316 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:35:49.003327 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:35:49.003333 | orchestrator | 2026-04-07 05:35:49.003339 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-07 05:35:49.003345 | orchestrator | Tuesday 07 April 2026 05:35:47 +0000 (0:00:00.607) 0:01:26.837 ********* 2026-04-07 05:35:49.003351 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-07 05:35:49.003357 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-07 05:35:49.003363 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-07 05:35:49.003369 | orchestrator | 2026-04-07 05:35:49.003374 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 05:35:49.003380 | orchestrator | Tuesday 07 April 2026 05:35:48 +0000 (0:00:00.555) 0:01:27.393 ********* 2026-04-07 05:35:49.003386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:35:49.003393 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:35:49.003401 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:35:49.003411 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 05:36:18.998866 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 05:36:18.998979 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 05:36:18.998995 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 05:36:18.999008 | orchestrator | 2026-04-07 
05:36:18.999021 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 05:36:18.999034 | orchestrator | Tuesday 07 April 2026 05:35:49 +0000 (0:00:00.826) 0:01:28.219 ********* 2026-04-07 05:36:18.999045 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:36:18.999056 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:36:18.999068 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:36:18.999079 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 05:36:18.999090 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 05:36:18.999101 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 05:36:18.999111 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 05:36:18.999122 | orchestrator | 2026-04-07 05:36:18.999134 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-04-07 05:36:18.999144 | orchestrator | Tuesday 07 April 2026 05:35:51 +0000 (0:00:02.226) 0:01:30.446 ********* 2026-04-07 05:36:18.999155 | orchestrator | changed: [testbed-node-3] 2026-04-07 05:36:18.999167 | orchestrator | changed: [testbed-node-4] 2026-04-07 05:36:18.999178 | orchestrator | changed: [testbed-node-5] 2026-04-07 05:36:18.999188 | orchestrator | changed: [testbed-manager] 2026-04-07 05:36:18.999199 | orchestrator | changed: [testbed-node-1] 2026-04-07 05:36:18.999210 | orchestrator | changed: [testbed-node-2] 2026-04-07 05:36:18.999220 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:36:18.999231 | orchestrator | 2026-04-07 05:36:18.999259 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-04-07 05:36:18.999271 | orchestrator | Tuesday 07 April 2026 05:36:01 +0000 (0:00:09.998) 0:01:40.445 ********* 2026-04-07 05:36:18.999282 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:18.999292 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:18.999303 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:18.999314 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:18.999325 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:18.999335 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:18.999346 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:18.999357 | orchestrator | 2026-04-07 05:36:18.999389 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-04-07 05:36:18.999401 | orchestrator | Tuesday 07 April 2026 05:36:02 +0000 (0:00:01.058) 0:01:41.504 ********* 2026-04-07 05:36:18.999413 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:18.999427 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:18.999439 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:18.999452 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:18.999464 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:18.999501 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:18.999514 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:18.999528 | orchestrator | 2026-04-07 05:36:18.999544 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-04-07 05:36:18.999564 | orchestrator | Tuesday 07 April 2026 05:36:03 +0000 (0:00:00.752) 0:01:42.256 ********* 2026-04-07 05:36:18.999583 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:18.999601 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:36:18.999619 | orchestrator | ok: [testbed-node-2] 2026-04-07 05:36:18.999639 | orchestrator | ok: [testbed-node-1] 2026-04-07 
05:36:18.999658 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:36:18.999676 | orchestrator | ok: [testbed-node-4] 2026-04-07 05:36:18.999695 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:36:18.999713 | orchestrator | 2026-04-07 05:36:18.999733 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-04-07 05:36:18.999753 | orchestrator | Tuesday 07 April 2026 05:36:05 +0000 (0:00:02.405) 0:01:44.662 ********* 2026-04-07 05:36:18.999775 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-07 05:36:18.999797 | orchestrator | 2026-04-07 05:36:18.999809 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-04-07 05:36:18.999820 | orchestrator | Tuesday 07 April 2026 05:36:07 +0000 (0:00:01.948) 0:01:46.610 ********* 2026-04-07 05:36:18.999831 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:18.999842 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:18.999853 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:18.999864 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:18.999874 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:18.999885 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:18.999895 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:18.999906 | orchestrator | 2026-04-07 05:36:18.999917 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-04-07 05:36:18.999930 | orchestrator | Tuesday 07 April 2026 05:36:08 +0000 (0:00:01.008) 0:01:47.618 ********* 2026-04-07 05:36:18.999947 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:18.999965 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:18.999985 | orchestrator | skipping: [testbed-node-2] 2026-04-07 
05:36:18.999996 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000009 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000028 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000046 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000066 | orchestrator | 2026-04-07 05:36:19.000085 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-04-07 05:36:19.000104 | orchestrator | Tuesday 07 April 2026 05:36:09 +0000 (0:00:01.006) 0:01:48.625 ********* 2026-04-07 05:36:19.000147 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000168 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000187 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000204 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000216 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000248 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000259 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000270 | orchestrator | 2026-04-07 05:36:19.000281 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-04-07 05:36:19.000305 | orchestrator | Tuesday 07 April 2026 05:36:10 +0000 (0:00:00.878) 0:01:49.503 ********* 2026-04-07 05:36:19.000316 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000327 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000338 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000348 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000359 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000369 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000380 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000390 | orchestrator | 2026-04-07 05:36:19.000401 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] 
********************** 2026-04-07 05:36:19.000412 | orchestrator | Tuesday 07 April 2026 05:36:11 +0000 (0:00:01.110) 0:01:50.614 ********* 2026-04-07 05:36:19.000423 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000433 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000444 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000454 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000465 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000495 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000507 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000517 | orchestrator | 2026-04-07 05:36:19.000529 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-04-07 05:36:19.000540 | orchestrator | Tuesday 07 April 2026 05:36:12 +0000 (0:00:00.841) 0:01:51.456 ********* 2026-04-07 05:36:19.000551 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000562 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000573 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000583 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000608 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000628 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000640 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000650 | orchestrator | 2026-04-07 05:36:19.000661 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-04-07 05:36:19.000672 | orchestrator | Tuesday 07 April 2026 05:36:13 +0000 (0:00:01.028) 0:01:52.484 ********* 2026-04-07 05:36:19.000683 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000694 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000705 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000715 | orchestrator | skipping: 
[testbed-node-3] 2026-04-07 05:36:19.000726 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000736 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000747 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000758 | orchestrator | 2026-04-07 05:36:19.000769 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-04-07 05:36:19.000779 | orchestrator | Tuesday 07 April 2026 05:36:14 +0000 (0:00:00.801) 0:01:53.285 ********* 2026-04-07 05:36:19.000790 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000801 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000811 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.000822 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.000833 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.000843 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.000859 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.000878 | orchestrator | 2026-04-07 05:36:19.000897 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-04-07 05:36:19.000917 | orchestrator | Tuesday 07 April 2026 05:36:15 +0000 (0:00:01.094) 0:01:54.380 ********* 2026-04-07 05:36:19.000938 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.000959 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.000981 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.001002 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.001022 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.001054 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.001076 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.001096 | orchestrator | 2026-04-07 05:36:19.001108 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-04-07 
05:36:19.001118 | orchestrator | Tuesday 07 April 2026 05:36:16 +0000 (0:00:00.800) 0:01:55.181 ********* 2026-04-07 05:36:19.001129 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.001140 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.001150 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.001161 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.001172 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.001620 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.001646 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.001665 | orchestrator | 2026-04-07 05:36:19.001684 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-04-07 05:36:19.001705 | orchestrator | Tuesday 07 April 2026 05:36:17 +0000 (0:00:01.006) 0:01:56.187 ********* 2026-04-07 05:36:19.001724 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.001743 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.001760 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.001794 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.001805 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:19.001822 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:19.001841 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:19.001859 | orchestrator | 2026-04-07 05:36:19.001879 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-04-07 05:36:19.001920 | orchestrator | Tuesday 07 April 2026 05:36:18 +0000 (0:00:00.977) 0:01:57.164 ********* 2026-04-07 05:36:19.001941 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:19.001959 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:19.001978 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:19.001998 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:19.002091 | orchestrator 
| skipping: [testbed-node-4] 2026-04-07 05:36:19.002191 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.223776 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.223899 | orchestrator | 2026-04-07 05:36:29.223926 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-04-07 05:36:29.223940 | orchestrator | Tuesday 07 April 2026 05:36:19 +0000 (0:00:00.790) 0:01:57.954 ********* 2026-04-07 05:36:29.223951 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.223962 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.223973 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.223985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 05:36:29.223997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 05:36:29.224008 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 05:36:29.224030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 05:36:29.224041 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 05:36:29.224062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  
2026-04-07 05:36:29.224073 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224109 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224121 | orchestrator | 2026-04-07 05:36:29.224148 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-04-07 05:36:29.224159 | orchestrator | Tuesday 07 April 2026 05:36:20 +0000 (0:00:01.229) 0:01:59.184 ********* 2026-04-07 05:36:29.224170 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224180 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224191 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224202 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224212 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224223 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224234 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224244 | orchestrator | 2026-04-07 05:36:29.224255 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-04-07 05:36:29.224266 | orchestrator | Tuesday 07 April 2026 05:36:21 +0000 (0:00:00.842) 0:02:00.027 ********* 2026-04-07 05:36:29.224276 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224287 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224297 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224308 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224319 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224329 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224340 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224350 | orchestrator | 2026-04-07 05:36:29.224361 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-04-07 05:36:29.224372 | orchestrator | Tuesday 07 April 2026 05:36:22 +0000 (0:00:01.317) 0:02:01.345 ********* 
2026-04-07 05:36:29.224382 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224393 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224404 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224414 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224425 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224435 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224446 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224488 | orchestrator | 2026-04-07 05:36:29.224500 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-04-07 05:36:29.224511 | orchestrator | Tuesday 07 April 2026 05:36:23 +0000 (0:00:00.797) 0:02:02.142 ********* 2026-04-07 05:36:29.224522 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224533 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224543 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224554 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224565 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224575 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224586 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224596 | orchestrator | 2026-04-07 05:36:29.224607 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-04-07 05:36:29.224618 | orchestrator | Tuesday 07 April 2026 05:36:24 +0000 (0:00:01.058) 0:02:03.201 ********* 2026-04-07 05:36:29.224629 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224640 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224650 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224661 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224671 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224682 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 05:36:29.224693 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224703 | orchestrator | 2026-04-07 05:36:29.224715 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-04-07 05:36:29.224725 | orchestrator | Tuesday 07 April 2026 05:36:25 +0000 (0:00:00.727) 0:02:03.929 ********* 2026-04-07 05:36:29.224736 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224747 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224770 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224781 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.224791 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.224802 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.224813 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224823 | orchestrator | 2026-04-07 05:36:29.224852 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-04-07 05:36:29.224864 | orchestrator | Tuesday 07 April 2026 05:36:26 +0000 (0:00:01.193) 0:02:05.122 ********* 2026-04-07 05:36:29.224875 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:29.224886 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:29.224896 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:29.224907 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:29.224918 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 05:36:29.224929 | orchestrator | 2026-04-07 05:36:29.224940 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-04-07 05:36:29.224951 | orchestrator | Tuesday 07 April 2026 05:36:28 +0000 (0:00:01.747) 0:02:06.870 ********* 2026-04-07 05:36:29.224961 | orchestrator | ok: [testbed-node-3] 2026-04-07 05:36:29.224972 | orchestrator | ok: 
[testbed-node-4] 2026-04-07 05:36:29.224983 | orchestrator | ok: [testbed-node-5] 2026-04-07 05:36:29.224994 | orchestrator | 2026-04-07 05:36:29.225004 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-04-07 05:36:29.225015 | orchestrator | Tuesday 07 April 2026 05:36:28 +0000 (0:00:00.413) 0:02:07.284 ********* 2026-04-07 05:36:29.225026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 05:36:29.225037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 05:36:29.225048 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.225058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 05:36:29.225075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 05:36:29.225086 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.225097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 05:36:29.225108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 05:36:29.225119 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:29.225130 | orchestrator | 2026-04-07 05:36:29.225141 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-04-07 05:36:29.225151 | orchestrator | Tuesday 07 April 2026 
05:36:28 +0000 (0:00:00.431) 0:02:07.715 ********* 2026-04-07 05:36:29.225164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:29.225178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:29.225189 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:29.225200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:29.225216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:29.225228 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:29.225238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 
'ansible_loop_var': 'item'})  2026-04-07 05:36:29.225257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:32.854922 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:32.855025 | orchestrator | 2026-04-07 05:36:32.855042 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-04-07 05:36:32.855055 | orchestrator | Tuesday 07 April 2026 05:36:29 +0000 (0:00:00.684) 0:02:08.399 ********* 2026-04-07 05:36:32.855068 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:32.855079 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:32.855090 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:32.855102 | orchestrator | 2026-04-07 05:36:32.855113 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-04-07 05:36:32.855125 | orchestrator | Tuesday 07 April 2026 05:36:29 +0000 (0:00:00.358) 0:02:08.757 ********* 2026-04-07 05:36:32.855136 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:32.855146 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:32.855158 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:32.855168 | orchestrator | 2026-04-07 05:36:32.855179 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-04-07 05:36:32.855190 | orchestrator | Tuesday 07 April 2026 05:36:30 +0000 (0:00:00.370) 0:02:09.128 ********* 2026-04-07 05:36:32.855201 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:32.855212 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:32.855223 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:32.855234 | 
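The "Resolve devices in lvm_volumes" and "Set_fact lvm_volumes_data_devices" tasks above skip every item, and the log shows why: each skip carries `false_condition: 'item.data_vg is undefined'`, i.e. an entry is only resolved as a raw device when it does not already name an LVM volume group. A minimal Python sketch of that partitioning (hypothetical helper names, not ceph-ansible code; the raw-device entry is invented for contrast):

```python
def needs_device_resolution(item: dict) -> bool:
    """Mirror the skip condition from the log: an lvm_volumes entry is
    resolved as a raw device only when no data_vg is given; entries that
    already reference a volume group are skipped by the resolve task."""
    return "data_vg" not in item

lvm_volumes = [
    # real-shaped entry from the log: data LV plus its volume group
    {"data": "osd-block-44abcd21-31e3-595d-ad07-7c010500a60a",
     "data_vg": "ceph-44abcd21-31e3-595d-ad07-7c010500a60a"},
    # hypothetical raw-device entry that *would* need resolution
    {"data": "/dev/sdb"},
]

to_resolve = [i for i in lvm_volumes if needs_device_resolution(i)]
```

With the testbed's configuration every entry carries a `data_vg`, so `to_resolve` is empty and both tasks report `skipping` for all hosts.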
orchestrator | 2026-04-07 05:36:32.855245 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-04-07 05:36:32.855256 | orchestrator | Tuesday 07 April 2026 05:36:30 +0000 (0:00:00.370) 0:02:09.498 ********* 2026-04-07 05:36:32.855267 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:32.855278 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:32.855289 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:32.855299 | orchestrator | 2026-04-07 05:36:32.855310 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-04-07 05:36:32.855321 | orchestrator | Tuesday 07 April 2026 05:36:31 +0000 (0:00:00.361) 0:02:09.860 ********* 2026-04-07 05:36:32.855349 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}) 2026-04-07 05:36:32.855361 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}) 2026-04-07 05:36:32.855372 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}) 2026-04-07 05:36:32.855403 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}) 2026-04-07 05:36:32.855415 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}) 2026-04-07 05:36:32.855425 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}) 2026-04-07 05:36:32.855436 | orchestrator | 2026-04-07 05:36:32.855447 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-04-07 05:36:32.855493 | orchestrator | Tuesday 07 April 2026 05:36:32 +0000 (0:00:01.521) 0:02:11.381 ********* 2026-04-07 05:36:32.855513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-44abcd21-31e3-595d-ad07-7c010500a60a/osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1775529860.3392444, 'mtime': 1775529860.3372443, 'ctime': 1775529860.3372443, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-44abcd21-31e3-595d-ad07-7c010500a60a/osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:32.855551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a/osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 
1775529881.3695838, 'mtime': 1775529881.3645837, 'ctime': 1775529881.3645837, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a/osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:32.855567 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:32.855588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ccafa0da-42f8-5022-b95e-1902d46c646f/osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1775529862.5629697, 'mtime': 1775529862.5579696, 'ctime': 1775529862.5579696, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-ccafa0da-42f8-5022-b95e-1902d46c646f/osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:32.855611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8941099b-00de-50f1-81f7-f26159704c09/osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1775529881.0692647, 'mtime': 1775529881.0652645, 'ctime': 1775529881.0652645, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8941099b-00de-50f1-81f7-f26159704c09/osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:32.855625 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:32.855648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-754aebfc-d76c-537f-941d-8ad36483cdb2/osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1775529862.2035186, 'mtime': 1775529862.1985185, 'ctime': 1775529862.1985185, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-754aebfc-d76c-537f-941d-8ad36483cdb2/osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.842764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d/osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1775529880.8148184, 'mtime': 1775529880.8108182, 'ctime': 1775529880.8108182, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d/osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.842902 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:34.842920 | orchestrator | 2026-04-07 05:36:34.842933 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-07 05:36:34.842946 | orchestrator | Tuesday 07 April 2026 05:36:32 +0000 (0:00:00.419) 0:02:11.801 ********* 2026-04-07 05:36:34.842958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 05:36:34.842971 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 05:36:34.842981 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:34.842993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 05:36:34.843004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 05:36:34.843014 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:34.843025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 
'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 05:36:34.843036 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 05:36:34.843046 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:34.843057 | orchestrator | 2026-04-07 05:36:34.843068 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-07 05:36:34.843080 | orchestrator | Tuesday 07 April 2026 05:36:33 +0000 (0:00:00.438) 0:02:12.239 ********* 2026-04-07 05:36:34.843092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843116 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:34.843127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843175 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:34.843186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843215 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:34.843227 | orchestrator | 2026-04-07 05:36:34.843238 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-07 05:36:34.843250 | orchestrator | Tuesday 07 April 2026 05:36:33 +0000 (0:00:00.372) 0:02:12.612 ********* 2026-04-07 05:36:34.843260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'})  2026-04-07 05:36:34.843272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'})  2026-04-07 05:36:34.843286 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:34.843299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'})  2026-04-07 05:36:34.843311 | orchestrator | 
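The "Check data logical volume" task stats `/dev/<data_vg>/<data>` for each entry, and the follow-up fail task is skipped on every host because each stat result reports an existing block device (`'exists': True, 'isblk': True`). A sketch of that pass/fail decision, assuming the condition is "path exists and is a block device" (the exact ceph-ansible expression may differ):

```python
def lv_check_fails(stat: dict) -> bool:
    """Assumed acceptance rule for an OSD data LV: the stat'ed path must
    exist and be a block device (isblk). Anything else would trigger the
    'Fail if one of the data logical volume is not a device' task."""
    return not (stat.get("exists") and stat.get("isblk"))

# shape taken from the stat output in the log above
ok_stat = {"exists": True, "isblk": True, "islnk": False, "isreg": False}
bad_stat = {"exists": True, "isblk": False, "isreg": True}
```

For `ok_stat` the fail task is skipped, which matches the all-`skipping` output in the log; a regular file in place of the LV (`bad_stat`) would abort the play instead.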
skipping: [testbed-node-4] => (item={'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'})  2026-04-07 05:36:34.843323 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:34.843336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'})  2026-04-07 05:36:34.843349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'})  2026-04-07 05:36:34.843361 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:34.843374 | orchestrator | 2026-04-07 05:36:34.843387 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-04-07 05:36:34.843400 | orchestrator | Tuesday 07 April 2026 05:36:34 +0000 (0:00:00.743) 0:02:13.356 ********* 2026-04-07 05:36:34.843413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-44abcd21-31e3-595d-ad07-7c010500a60a', 'data_vg': 'ceph-44abcd21-31e3-595d-ad07-7c010500a60a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-116f5715-f5f6-56e4-87eb-3f2be33e5f2a', 'data_vg': 'ceph-116f5715-f5f6-56e4-87eb-3f2be33e5f2a'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843439 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:34.843477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-ccafa0da-42f8-5022-b95e-1902d46c646f', 'data_vg': 'ceph-ccafa0da-42f8-5022-b95e-1902d46c646f'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8941099b-00de-50f1-81f7-f26159704c09', 'data_vg': 'ceph-8941099b-00de-50f1-81f7-f26159704c09'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843509 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:34.843522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-754aebfc-d76c-537f-941d-8ad36483cdb2', 'data_vg': 'ceph-754aebfc-d76c-537f-941d-8ad36483cdb2'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:34.843542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ed7b856a-23c6-522d-bad3-e57b6a18196d', 'data_vg': 'ceph-ed7b856a-23c6-522d-bad3-e57b6a18196d'}, 'ansible_loop_var': 'item'})  2026-04-07 05:36:39.552426 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:39.552607 | orchestrator | 2026-04-07 05:36:39.552670 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-07 05:36:39.552695 | orchestrator | Tuesday 07 April 2026 05:36:34 +0000 (0:00:00.415) 0:02:13.771 ********* 2026-04-07 05:36:39.552707 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:39.552718 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:39.552729 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:39.552740 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:39.552768 | orchestrator | skipping: 
[testbed-node-4] 2026-04-07 05:36:39.552779 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:39.552790 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:39.552801 | orchestrator | 2026-04-07 05:36:39.552813 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-07 05:36:39.552823 | orchestrator | Tuesday 07 April 2026 05:36:35 +0000 (0:00:00.799) 0:02:14.571 ********* 2026-04-07 05:36:39.552834 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:39.552845 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:39.552855 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:39.552866 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:39.552878 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-07 05:36:39.552889 | orchestrator | 2026-04-07 05:36:39.552900 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-07 05:36:39.552911 | orchestrator | Tuesday 07 April 2026 05:36:37 +0000 (0:00:01.803) 0:02:16.374 ********* 2026-04-07 05:36:39.552922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.552935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.552948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.552961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.552974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.552986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553073 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:39.553087 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:39.553100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553160 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:39.553171 | orchestrator 
| 2026-04-07 05:36:39.553182 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-07 05:36:39.553193 | orchestrator | Tuesday 07 April 2026 05:36:37 +0000 (0:00:00.436) 0:02:16.811 ********* 2026-04-07 05:36:39.553204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553278 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:39.553289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-04-07 05:36:39.553350 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:39.553361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553422 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:39.553434 | orchestrator | 2026-04-07 05:36:39.553467 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-07 05:36:39.553478 | orchestrator | Tuesday 07 April 2026 05:36:38 +0000 (0:00:00.730) 0:02:17.541 ********* 2026-04-07 05:36:39.553490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-04-07 05:36:39.553533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553544 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:39.553555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553610 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:39.553621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-07 05:36:39.553664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-07 05:36:39.553675 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:39.553686 | orchestrator | 2026-04-07 05:36:39.553697 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-04-07 05:36:39.553708 | orchestrator | Tuesday 07 April 2026 05:36:39 +0000 (0:00:00.529) 0:02:18.071 ********* 2026-04-07 05:36:39.553719 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:39.553730 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:39.553748 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.159294 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.159401 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.159416 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.159470 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.159483 | orchestrator | 2026-04-07 05:36:47.159496 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-04-07 05:36:47.159533 | orchestrator | Tuesday 07 April 2026 05:36:40 +0000 (0:00:00.775) 0:02:18.847 ********* 2026-04-07 05:36:47.159544 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.159569 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.159581 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.159591 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.159602 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.159613 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.159623 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.159634 | orchestrator | 2026-04-07 05:36:47.159645 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-04-07 05:36:47.159656 | orchestrator | Tuesday 07 April 2026 05:36:41 +0000 (0:00:01.246) 0:02:20.093 ********* 2026-04-07 05:36:47.159667 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 05:36:47.159677 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.159688 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.159699 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.159709 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.159720 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.159731 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.159741 | orchestrator | 2026-04-07 05:36:47.159753 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-04-07 05:36:47.159765 | orchestrator | Tuesday 07 April 2026 05:36:42 +0000 (0:00:00.814) 0:02:20.907 ********* 2026-04-07 05:36:47.159775 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.159786 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.159797 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.159807 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.159818 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.159829 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.159842 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.159855 | orchestrator | 2026-04-07 05:36:47.159868 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-04-07 05:36:47.159881 | orchestrator | Tuesday 07 April 2026 05:36:43 +0000 (0:00:01.234) 0:02:22.142 ********* 2026-04-07 05:36:47.159896 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.159908 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.159920 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.159933 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.159946 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.159958 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.159969 | 
orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.159980 | orchestrator | 2026-04-07 05:36:47.159991 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-04-07 05:36:47.160002 | orchestrator | Tuesday 07 April 2026 05:36:44 +0000 (0:00:01.071) 0:02:23.213 ********* 2026-04-07 05:36:47.160013 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.160023 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.160034 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.160045 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.160056 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.160066 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.160077 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.160087 | orchestrator | 2026-04-07 05:36:47.160098 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-04-07 05:36:47.160109 | orchestrator | Tuesday 07 April 2026 05:36:45 +0000 (0:00:00.811) 0:02:24.025 ********* 2026-04-07 05:36:47.160121 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.160132 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:47.160142 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.160153 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:47.160164 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:47.160183 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:47.160194 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:47.160204 | orchestrator | 2026-04-07 05:36:47.160215 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-04-07 05:36:47.160226 | orchestrator | Tuesday 07 April 2026 05:36:46 +0000 (0:00:01.210) 0:02:25.236 ********* 2026-04-07 05:36:47.160238 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:47.160250 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:47.160263 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:47.160275 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:47.160287 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:47.160300 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:47.160311 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:47.160339 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:47.160351 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:47.160367 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:47.160379 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:47.160390 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:47.160401 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:47.160416 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:47.160457 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:47.160468 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:47.160479 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:47.160490 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:47.160509 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:47.160534 | orchestrator | skipping: [testbed-node-1] 
2026-04-07 05:36:47.160545 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:47.160556 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:47.160567 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:47.160578 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:47.160588 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:47.160599 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:47.160610 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:47.160621 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:47.160632 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:47.160643 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 
'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:47.160661 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259419 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259568 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259583 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:49.259592 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259600 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259610 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259617 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259623 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259648 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259654 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259659 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259666 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:49.259672 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:49.259678 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:49.259684 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:49.259689 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259696 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:49.259701 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:49.259707 | orchestrator | 2026-04-07 05:36:49.259716 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-04-07 05:36:49.259724 | orchestrator | Tuesday 07 April 2026 05:36:47 +0000 (0:00:01.064) 0:02:26.300 ********* 2026-04-07 05:36:49.259731 | orchestrator | skipping: [testbed-node-0] 2026-04-07 
05:36:49.259738 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:49.259744 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:49.259750 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:36:49.259756 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:36:49.259763 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:36:49.259770 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:36:49.259777 | orchestrator | 2026-04-07 05:36:49.259784 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-04-07 05:36:49.259791 | orchestrator | Tuesday 07 April 2026 05:36:48 +0000 (0:00:01.053) 0:02:27.353 ********* 2026-04-07 05:36:49.259797 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259804 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259810 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259817 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259838 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259852 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-04-07 05:36:49.259859 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:36:49.259866 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259879 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259886 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259892 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259899 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259907 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:49.259914 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:36:49.259920 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259927 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259934 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259941 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.259948 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.259955 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:36:49.259962 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:36:49.259968 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259975 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.259982 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:36:49.259989 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:36:49.259996 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:36:49.260002 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:36:49.260010 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:36:49.260025 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:37:05.756922 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:37:05.757024 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:37:05.757035 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:37:05.757042 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757050 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:37:05.757057 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:37:05.757065 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 
'client.glance'})  2026-04-07 05:37:05.757071 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-07 05:37:05.757078 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-07 05:37:05.757084 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-07 05:37:05.757090 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:37:05.757096 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:37:05.757102 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:37:05.757108 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:37:05.757114 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:37:05.757120 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757126 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-07 05:37:05.757132 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-07 05:37:05.757138 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-07 05:37:05.757144 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:37:05.757150 | orchestrator | 2026-04-07 05:37:05.757157 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-04-07 05:37:05.757179 | orchestrator | Tuesday 07 April 2026 05:36:49 +0000 (0:00:01.084) 0:02:28.438 ********* 2026-04-07 05:37:05.757186 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:05.757192 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:37:05.757197 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:37:05.757203 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757209 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757215 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:37:05.757221 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:37:05.757226 | orchestrator | 2026-04-07 05:37:05.757232 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-04-07 05:37:05.757238 | orchestrator | Tuesday 07 April 2026 05:36:50 +0000 (0:00:01.257) 0:02:29.695 ********* 2026-04-07 05:37:05.757244 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:05.757250 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:37:05.757256 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:37:05.757262 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757268 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757273 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:37:05.757279 | orchestrator | skipping: [testbed-manager] 2026-04-07 
05:37:05.757285 | orchestrator | 2026-04-07 05:37:05.757291 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-04-07 05:37:05.757308 | orchestrator | Tuesday 07 April 2026 05:36:51 +0000 (0:00:00.839) 0:02:30.534 ********* 2026-04-07 05:37:05.757314 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:05.757320 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:37:05.757326 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:37:05.757335 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757344 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757354 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:37:05.757364 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:37:05.757373 | orchestrator | 2026-04-07 05:37:05.757383 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-07 05:37:05.757393 | orchestrator | Tuesday 07 April 2026 05:36:53 +0000 (0:00:02.086) 0:02:32.621 ********* 2026-04-07 05:37:05.757444 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-07 05:37:05.757455 | orchestrator | 2026-04-07 05:37:05.757465 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-04-07 05:37:05.757475 | orchestrator | Tuesday 07 April 2026 05:36:55 +0000 (0:00:02.059) 0:02:34.680 ********* 2026-04-07 05:37:05.757485 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757498 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757510 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 
05:37:05.757521 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757529 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757536 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757545 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-07 05:37:05.757556 | orchestrator | 2026-04-07 05:37:05.757565 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-04-07 05:37:05.757575 | orchestrator | Tuesday 07 April 2026 05:36:56 +0000 (0:00:01.014) 0:02:35.695 ********* 2026-04-07 05:37:05.757585 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:05.757596 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:37:05.757607 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:37:05.757625 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757635 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757646 | orchestrator | skipping: [testbed-node-5] 2026-04-07 05:37:05.757655 | orchestrator | skipping: [testbed-manager] 2026-04-07 05:37:05.757665 | orchestrator | 2026-04-07 05:37:05.757675 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-04-07 05:37:05.757684 | orchestrator | Tuesday 07 April 2026 05:36:58 +0000 (0:00:01.247) 0:02:36.942 ********* 2026-04-07 05:37:05.757694 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:05.757703 | orchestrator | skipping: [testbed-node-1] 2026-04-07 05:37:05.757713 | orchestrator | skipping: [testbed-node-2] 2026-04-07 05:37:05.757721 | orchestrator | skipping: [testbed-node-3] 2026-04-07 05:37:05.757727 | orchestrator | skipping: [testbed-node-4] 2026-04-07 05:37:05.757735 | orchestrator | skipping: [testbed-node-5] 
2026-04-07 05:37:05.757742 | orchestrator | skipping: [testbed-manager]
2026-04-07 05:37:05.757750 | orchestrator |
2026-04-07 05:37:05.757760 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-07 05:37:05.757770 | orchestrator | Tuesday 07 April 2026 05:36:58 +0000 (0:00:00.838) 0:02:37.781 *********
2026-04-07 05:37:05.757779 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:05.757789 | orchestrator | ok: [testbed-node-1]
2026-04-07 05:37:05.757798 | orchestrator | ok: [testbed-node-2]
2026-04-07 05:37:05.757808 | orchestrator | ok: [testbed-node-3]
2026-04-07 05:37:05.757817 | orchestrator | ok: [testbed-node-4]
2026-04-07 05:37:05.757828 | orchestrator | ok: [testbed-node-5]
2026-04-07 05:37:05.757838 | orchestrator | ok: [testbed-manager]
2026-04-07 05:37:05.757847 | orchestrator |
2026-04-07 05:37:05.757857 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-07 05:37:05.757867 | orchestrator | Tuesday 07 April 2026 05:37:00 +0000 (0:00:01.511) 0:02:39.293 *********
2026-04-07 05:37:05.757877 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:05.757886 | orchestrator | skipping: [testbed-node-1]
2026-04-07 05:37:05.757891 | orchestrator | skipping: [testbed-node-2]
2026-04-07 05:37:05.757897 | orchestrator | skipping: [testbed-node-3]
2026-04-07 05:37:05.757903 | orchestrator | skipping: [testbed-node-4]
2026-04-07 05:37:05.757909 | orchestrator | skipping: [testbed-node-5]
2026-04-07 05:37:05.757914 | orchestrator | skipping: [testbed-manager]
2026-04-07 05:37:05.757920 | orchestrator |
2026-04-07 05:37:05.757926 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-07 05:37:05.757932 | orchestrator | Tuesday 07 April 2026 05:37:02 +0000 (0:00:01.614) 0:02:40.907 *********
2026-04-07 05:37:05.757938 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:05.757944 | orchestrator | skipping: [testbed-node-1]
2026-04-07 05:37:05.757949 | orchestrator | skipping: [testbed-node-2]
2026-04-07 05:37:05.757955 | orchestrator | skipping: [testbed-node-3]
2026-04-07 05:37:05.757961 | orchestrator | skipping: [testbed-node-4]
2026-04-07 05:37:05.757967 | orchestrator | skipping: [testbed-node-5]
2026-04-07 05:37:05.757973 | orchestrator | skipping: [testbed-manager]
2026-04-07 05:37:05.757978 | orchestrator |
2026-04-07 05:37:05.757984 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-07 05:37:05.757990 | orchestrator | Tuesday 07 April 2026 05:37:03 +0000 (0:00:01.760) 0:02:42.668 *********
2026-04-07 05:37:05.757996 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:05.758002 | orchestrator |
2026-04-07 05:37:05.758008 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-07 05:37:05.758058 | orchestrator | Tuesday 07 April 2026 05:37:05 +0000 (0:00:01.729) 0:02:44.398 *********
2026-04-07 05:37:05.758066 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:05.758072 | orchestrator |
2026-04-07 05:37:05.758088 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-07 05:37:25.093679 | orchestrator |
2026-04-07 05:37:25.093790 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 05:37:25.093807 | orchestrator | Tuesday 07 April 2026 05:37:06 +0000 (0:00:00.725) 0:02:45.123 *********
2026-04-07 05:37:25.093855 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.093868 | orchestrator |
2026-04-07 05:37:25.093879 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 05:37:25.093890 | orchestrator | Tuesday 07 April 2026 05:37:06 +0000 (0:00:00.537) 0:02:45.661 *********
2026-04-07 05:37:25.093901 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.093912 | orchestrator |
2026-04-07 05:37:25.093923 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-07 05:37:25.093934 | orchestrator | Tuesday 07 April 2026 05:37:07 +0000 (0:00:00.519) 0:02:46.181 *********
2026-04-07 05:37:25.093947 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-07 05:37:25.093961 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-07 05:37:25.093972 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-07 05:37:25.093984 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-07 05:37:25.093997 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-07 05:37:25.094009 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}])
2026-04-07 05:37:25.094082 | orchestrator |
2026-04-07 05:37:25.094094 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-07 05:37:25.094105 | orchestrator |
2026-04-07 05:37:25.094116 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-07 05:37:25.094127 | orchestrator | Tuesday 07 April 2026 05:37:17 +0000 (0:00:10.021) 0:02:56.203 *********
2026-04-07 05:37:25.094138 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094149 | orchestrator |
2026-04-07 05:37:25.094159 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-07 05:37:25.094170 | orchestrator | Tuesday 07 April 2026 05:37:17 +0000 (0:00:00.504) 0:02:56.708 *********
2026-04-07 05:37:25.094214 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094230 | orchestrator |
2026-04-07 05:37:25.094243 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-07 05:37:25.094257 | orchestrator | Tuesday 07 April 2026 05:37:18 +0000 (0:00:00.153) 0:02:56.862 *********
2026-04-07 05:37:25.094277 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:25.094291 | orchestrator |
2026-04-07 05:37:25.094304 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-07 05:37:25.094317 | orchestrator | Tuesday 07 April 2026 05:37:18 +0000 (0:00:00.142) 0:02:57.005 *********
2026-04-07 05:37:25.094330 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094341 | orchestrator |
2026-04-07 05:37:25.094352 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-07 05:37:25.094363 | orchestrator | Tuesday 07 April 2026 05:37:18 +0000 (0:00:00.158) 0:02:57.163 *********
2026-04-07 05:37:25.094394 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-07 05:37:25.094406 | orchestrator |
2026-04-07 05:37:25.094417 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-07 05:37:25.094446 | orchestrator | Tuesday 07 April 2026 05:37:18 +0000 (0:00:00.236) 0:02:57.400 *********
2026-04-07 05:37:25.094457 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094468 | orchestrator |
2026-04-07 05:37:25.094479 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-07 05:37:25.094497 | orchestrator | Tuesday 07 April 2026 05:37:18 +0000 (0:00:00.432) 0:02:57.832 *********
2026-04-07 05:37:25.094508 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094519 | orchestrator |
2026-04-07 05:37:25.094529 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-07 05:37:25.094540 | orchestrator | Tuesday 07 April 2026 05:37:19 +0000 (0:00:00.132) 0:02:57.965 *********
2026-04-07 05:37:25.094551 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094561 | orchestrator |
2026-04-07 05:37:25.094573 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-07 05:37:25.094583 | orchestrator | Tuesday 07 April 2026 05:37:19 +0000 (0:00:00.459) 0:02:58.425 *********
2026-04-07 05:37:25.094594 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094604 | orchestrator |
2026-04-07 05:37:25.094615 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-07 05:37:25.094626 | orchestrator | Tuesday 07 April 2026 05:37:20 +0000 (0:00:00.497) 0:02:58.923 *********
2026-04-07 05:37:25.094637 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094647 | orchestrator |
2026-04-07 05:37:25.094658 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-07 05:37:25.094669 | orchestrator | Tuesday 07 April 2026 05:37:20 +0000 (0:00:00.158) 0:02:59.081 *********
2026-04-07 05:37:25.094680 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094690 | orchestrator |
2026-04-07 05:37:25.094701 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-07 05:37:25.094712 | orchestrator | Tuesday 07 April 2026 05:37:20 +0000 (0:00:00.188) 0:02:59.270 *********
2026-04-07 05:37:25.094723 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:25.094734 | orchestrator |
2026-04-07 05:37:25.094745 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-07 05:37:25.094756 | orchestrator | Tuesday 07 April 2026 05:37:20 +0000 (0:00:00.149) 0:02:59.420 *********
2026-04-07 05:37:25.094766 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094777 | orchestrator |
2026-04-07 05:37:25.094788 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-07 05:37:25.094799 | orchestrator | Tuesday 07 April 2026 05:37:20 +0000 (0:00:00.140) 0:02:59.560 *********
2026-04-07 05:37:25.094809 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:37:25.094820 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 05:37:25.094831 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 05:37:25.094842 | orchestrator |
2026-04-07 05:37:25.094853 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-07 05:37:25.094864 | orchestrator | Tuesday 07 April 2026 05:37:21 +0000 (0:00:00.727) 0:03:00.288 *********
2026-04-07 05:37:25.094882 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:25.094893 | orchestrator |
2026-04-07 05:37:25.094903 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-07 05:37:25.094914 | orchestrator | Tuesday 07 April 2026 05:37:21 +0000 (0:00:00.280) 0:03:00.569 *********
2026-04-07 05:37:25.094925 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:37:25.094936 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-07 05:37:25.094947 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-07 05:37:25.094958 | orchestrator |
2026-04-07 05:37:25.094968 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-07 05:37:25.094979 | orchestrator | Tuesday 07 April 2026 05:37:23 +0000 (0:00:01.864) 0:03:02.433 *********
2026-04-07 05:37:25.094990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-07 05:37:25.095001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-07 05:37:25.095012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-07 05:37:25.095022 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:25.095033 | orchestrator |
2026-04-07 05:37:25.095044 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-07 05:37:25.095055 | orchestrator | Tuesday 07 April 2026 05:37:24 +0000 (0:00:00.468) 0:03:02.902 *********
2026-04-07 05:37:25.095068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 05:37:25.095081 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 05:37:25.095092 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 05:37:25.095103 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:25.095114 | orchestrator |
2026-04-07 05:37:25.095125 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-07 05:37:25.095136 | orchestrator | Tuesday 07 April 2026 05:37:25 +0000 (0:00:00.957) 0:03:03.859 *********
2026-04-07 05:37:25.095161 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515472 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515579 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515595 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.515608 | orchestrator |
2026-04-07 05:37:29.515645 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-07 05:37:29.515659 | orchestrator | Tuesday 07 April 2026 05:37:25 +0000 (0:00:00.164) 0:03:04.024 *********
2026-04-07 05:37:29.515672 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5d7151ccbc56', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-07 05:37:22.293362', 'end': '2026-04-07 05:37:22.339661', 'delta': '0:00:00.046299', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5d7151ccbc56'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515687 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8d9f46c7c23', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-07 05:37:22.839055', 'end': '2026-04-07 05:37:22.888286', 'delta': '0:00:00.049231', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8d9f46c7c23'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515699 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f4f6ca89ad43', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-07 05:37:23.379520', 'end': '2026-04-07 05:37:23.424688', 'delta': '0:00:00.045168', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4f6ca89ad43'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-07 05:37:29.515710 | orchestrator |
2026-04-07 05:37:29.515722 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-07 05:37:29.515733 | orchestrator | Tuesday 07 April 2026 05:37:25 +0000 (0:00:00.191) 0:03:04.216 *********
2026-04-07 05:37:29.515744 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:29.515756 | orchestrator |
2026-04-07 05:37:29.515766 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-07 05:37:29.515777 | orchestrator | Tuesday 07 April 2026 05:37:25 +0000 (0:00:00.900) 0:03:04.479 *********
2026-04-07 05:37:29.515788 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.515799 | orchestrator |
2026-04-07 05:37:29.515810 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-07 05:37:29.515821 | orchestrator | Tuesday 07 April 2026 05:37:26 +0000 (0:00:00.143) 0:03:05.379 *********
2026-04-07 05:37:29.515831 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:29.515842 | orchestrator |
2026-04-07 05:37:29.515853 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-07 05:37:29.515877 | orchestrator | Tuesday 07 April 2026 05:37:26 +0000 (0:00:00.143) 0:03:05.523 *********
2026-04-07 05:37:29.515906 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-07 05:37:29.515918 | orchestrator |
2026-04-07 05:37:29.515929 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 05:37:29.515940 | orchestrator | Tuesday 07 April 2026 05:37:27 +0000 (0:00:01.125) 0:03:06.649 *********
2026-04-07 05:37:29.515969 | orchestrator | ok: [testbed-node-0]
2026-04-07 05:37:29.515982 | orchestrator |
2026-04-07 05:37:29.515995 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-07 05:37:29.516008 | orchestrator | Tuesday 07 April 2026 05:37:27 +0000 (0:00:00.153) 0:03:06.802 *********
2026-04-07 05:37:29.516020 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516032 | orchestrator |
2026-04-07 05:37:29.516045 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-07 05:37:29.516058 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.136) 0:03:06.939 *********
2026-04-07 05:37:29.516071 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516083 | orchestrator |
2026-04-07 05:37:29.516095 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-07 05:37:29.516107 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.244) 0:03:07.183 *********
2026-04-07 05:37:29.516120 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516132 | orchestrator |
2026-04-07 05:37:29.516144 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-07 05:37:29.516157 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.168) 0:03:07.352 *********
2026-04-07 05:37:29.516169 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516182 | orchestrator |
2026-04-07 05:37:29.516194 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-07 05:37:29.516207 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.144) 0:03:07.496 *********
2026-04-07 05:37:29.516219 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516231 | orchestrator |
2026-04-07 05:37:29.516244 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-07 05:37:29.516256 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.168) 0:03:07.664 *********
2026-04-07 05:37:29.516268 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516281 | orchestrator |
2026-04-07 05:37:29.516294 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-07 05:37:29.516306 | orchestrator | Tuesday 07 April 2026 05:37:28 +0000 (0:00:00.140) 0:03:07.805 *********
2026-04-07 05:37:29.516317 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516328 | orchestrator |
2026-04-07 05:37:29.516339 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-07 05:37:29.516350 | orchestrator | Tuesday 07 April 2026 05:37:29 +0000 (0:00:00.139) 0:03:07.944 *********
2026-04-07 05:37:29.516361 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516443 | orchestrator |
2026-04-07 05:37:29.516456 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-07 05:37:29.516468 | orchestrator | Tuesday 07 April 2026 05:37:29 +0000 (0:00:00.154) 0:03:08.098 *********
2026-04-07 05:37:29.516479 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:29.516490 | orchestrator |
2026-04-07 05:37:29.516501 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-07 05:37:29.516511 | orchestrator | Tuesday 07 April 2026 05:37:29 +0000 (0:00:00.140) 0:03:08.239 *********
2026-04-07 05:37:29.516523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:29.516535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:29.516555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:29.516574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-07 05:37:29.516597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:30.101937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:30.102896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:30.102940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-07 05:37:30.102984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:30.102997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-07 05:37:30.103009 | orchestrator | skipping: [testbed-node-0]
2026-04-07 05:37:30.103023 | orchestrator |
2026-04-07 05:37:30.103035 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-07 05:37:30.103047 | orchestrator | Tuesday 07 April 2026 05:37:29 +0000 (0:00:00.580) 0:03:08.819 *********
2026-04-07 05:37:30.103083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-07-01-23-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103268 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:30.103294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:39.937077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cddfb89c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1', 'scsi-SQEMU_QEMU_HARDDISK_cddfb89c-0910-445c-9577-7506a4630395-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:39.937235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:39.937279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-07 05:37:39.937296 |
orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.937314 | orchestrator | 2026-04-07 05:37:39.937331 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-07 05:37:39.937348 | orchestrator | Tuesday 07 April 2026 05:37:30 +0000 (0:00:00.280) 0:03:09.100 ********* 2026-04-07 05:37:39.937435 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:37:39.937452 | orchestrator | 2026-04-07 05:37:39.937468 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-07 05:37:39.937483 | orchestrator | Tuesday 07 April 2026 05:37:30 +0000 (0:00:00.504) 0:03:09.604 ********* 2026-04-07 05:37:39.937498 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:37:39.937513 | orchestrator | 2026-04-07 05:37:39.937528 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 05:37:39.937573 | orchestrator | Tuesday 07 April 2026 05:37:30 +0000 (0:00:00.148) 0:03:09.753 ********* 2026-04-07 05:37:39.937595 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:37:39.937614 | orchestrator | 2026-04-07 05:37:39.937635 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 05:37:39.937654 | orchestrator | Tuesday 07 April 2026 05:37:31 +0000 (0:00:00.499) 0:03:10.252 ********* 2026-04-07 05:37:39.937674 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.937740 | orchestrator | 2026-04-07 05:37:39.937760 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-07 05:37:39.937780 | orchestrator | Tuesday 07 April 2026 05:37:31 +0000 (0:00:00.141) 0:03:10.394 ********* 2026-04-07 05:37:39.937799 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.937819 | orchestrator | 2026-04-07 05:37:39.937839 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-07 
05:37:39.937859 | orchestrator | Tuesday 07 April 2026 05:37:31 +0000 (0:00:00.262) 0:03:10.656 ********* 2026-04-07 05:37:39.937877 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.937896 | orchestrator | 2026-04-07 05:37:39.937915 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 05:37:39.937933 | orchestrator | Tuesday 07 April 2026 05:37:31 +0000 (0:00:00.157) 0:03:10.814 ********* 2026-04-07 05:37:39.937952 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:37:39.937985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 05:37:39.938003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 05:37:39.938097 | orchestrator | 2026-04-07 05:37:39.938121 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 05:37:39.938139 | orchestrator | Tuesday 07 April 2026 05:37:33 +0000 (0:00:01.059) 0:03:11.873 ********* 2026-04-07 05:37:39.938159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 05:37:39.938178 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 05:37:39.938195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 05:37:39.938214 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.938232 | orchestrator | 2026-04-07 05:37:39.938249 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-07 05:37:39.938269 | orchestrator | Tuesday 07 April 2026 05:37:33 +0000 (0:00:00.174) 0:03:12.048 ********* 2026-04-07 05:37:39.938288 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.938307 | orchestrator | 2026-04-07 05:37:39.938325 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-07 05:37:39.938342 | orchestrator | Tuesday 07 April 2026 05:37:33 +0000 
(0:00:00.154) 0:03:12.202 ********* 2026-04-07 05:37:39.938391 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:37:39.938410 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:37:39.938430 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:37:39.938446 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 05:37:39.938464 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 05:37:39.938483 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 05:37:39.938501 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 05:37:39.938519 | orchestrator | 2026-04-07 05:37:39.938537 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-07 05:37:39.938556 | orchestrator | Tuesday 07 April 2026 05:37:34 +0000 (0:00:01.277) 0:03:13.479 ********* 2026-04-07 05:37:39.938575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:37:39.938592 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:37:39.938610 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:37:39.938629 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-07 05:37:39.938647 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-07 05:37:39.938666 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-07 05:37:39.938684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-07 
05:37:39.938702 | orchestrator | 2026-04-07 05:37:39.938721 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-07 05:37:39.938752 | orchestrator | Tuesday 07 April 2026 05:37:36 +0000 (0:00:01.730) 0:03:15.209 ********* 2026-04-07 05:37:39.938772 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-07 05:37:39.938789 | orchestrator | 2026-04-07 05:37:39.938807 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-07 05:37:39.938826 | orchestrator | Tuesday 07 April 2026 05:37:38 +0000 (0:00:01.987) 0:03:17.197 ********* 2026-04-07 05:37:39.938845 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.938862 | orchestrator | 2026-04-07 05:37:39.938879 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-07 05:37:39.938897 | orchestrator | Tuesday 07 April 2026 05:37:38 +0000 (0:00:00.229) 0:03:17.427 ********* 2026-04-07 05:37:39.938930 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:37:39.938950 | orchestrator | 2026-04-07 05:37:39.938968 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-07 05:37:39.938986 | orchestrator | Tuesday 07 April 2026 05:37:38 +0000 (0:00:00.130) 0:03:17.557 ********* 2026-04-07 05:37:39.939004 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-07 05:37:39.939022 | orchestrator | 2026-04-07 05:37:39.939040 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-07 05:37:39.939075 | orchestrator | Tuesday 07 April 2026 05:37:39 +0000 (0:00:01.215) 0:03:18.772 ********* 2026-04-07 05:38:05.992439 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.992550 | orchestrator | 2026-04-07 05:38:05.992567 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-04-07 05:38:05.992581 | orchestrator | Tuesday 07 April 2026 05:37:40 +0000 (0:00:00.149) 0:03:18.922 ********* 2026-04-07 05:38:05.992593 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:38:05.992604 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-07 05:38:05.992615 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-07 05:38:05.992626 | orchestrator | 2026-04-07 05:38:05.992637 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-07 05:38:05.992648 | orchestrator | Tuesday 07 April 2026 05:37:41 +0000 (0:00:01.455) 0:03:20.378 ********* 2026-04-07 05:38:05.992659 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-04-07 05:38:05.992670 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-04-07 05:38:05.992682 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-04-07 05:38:05.992693 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-04-07 05:38:05.992703 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-04-07 05:38:05.992714 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-04-07 05:38:05.992725 | orchestrator | 2026-04-07 05:38:05.992736 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-07 05:38:05.992747 | orchestrator | Tuesday 07 April 2026 05:37:53 +0000 (0:00:11.947) 0:03:32.326 ********* 2026-04-07 05:38:05.992758 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:38:05.992769 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:38:05.992780 | orchestrator | 2026-04-07 05:38:05.992791 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-07 05:38:05.992802 | orchestrator | Tuesday 07 April 2026 05:37:56 +0000 (0:00:02.998) 0:03:35.324 ********* 2026-04-07 05:38:05.992812 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:38:05.992823 | orchestrator | 2026-04-07 05:38:05.992834 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-07 05:38:05.992845 | orchestrator | Tuesday 07 April 2026 05:37:58 +0000 (0:00:01.569) 0:03:36.894 ********* 2026-04-07 05:38:05.992856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-07 05:38:05.992867 | orchestrator | 2026-04-07 05:38:05.992877 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-07 05:38:05.992888 | orchestrator | Tuesday 07 April 2026 05:37:58 +0000 (0:00:00.575) 0:03:37.470 ********* 2026-04-07 05:38:05.992899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-07 05:38:05.992910 | orchestrator | 2026-04-07 05:38:05.992921 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-07 05:38:05.992934 | orchestrator | Tuesday 07 April 2026 05:37:59 +0000 (0:00:00.573) 0:03:38.043 ********* 2026-04-07 05:38:05.992974 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.992987 | orchestrator | 2026-04-07 05:38:05.993000 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-07 05:38:05.993013 | orchestrator | Tuesday 07 April 2026 05:38:00 +0000 (0:00:00.840) 0:03:38.883 ********* 2026-04-07 05:38:05.993027 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993039 | orchestrator | 
2026-04-07 05:38:05.993053 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-07 05:38:05.993066 | orchestrator | Tuesday 07 April 2026 05:38:00 +0000 (0:00:00.130) 0:03:39.014 ********* 2026-04-07 05:38:05.993079 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993093 | orchestrator | 2026-04-07 05:38:05.993105 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-07 05:38:05.993118 | orchestrator | Tuesday 07 April 2026 05:38:00 +0000 (0:00:00.149) 0:03:39.163 ********* 2026-04-07 05:38:05.993132 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993144 | orchestrator | 2026-04-07 05:38:05.993158 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-07 05:38:05.993170 | orchestrator | Tuesday 07 April 2026 05:38:00 +0000 (0:00:00.141) 0:03:39.305 ********* 2026-04-07 05:38:05.993196 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993210 | orchestrator | 2026-04-07 05:38:05.993223 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-07 05:38:05.993237 | orchestrator | Tuesday 07 April 2026 05:38:01 +0000 (0:00:00.550) 0:03:39.855 ********* 2026-04-07 05:38:05.993250 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993263 | orchestrator | 2026-04-07 05:38:05.993276 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-07 05:38:05.993288 | orchestrator | Tuesday 07 April 2026 05:38:01 +0000 (0:00:00.129) 0:03:39.985 ********* 2026-04-07 05:38:05.993299 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993309 | orchestrator | 2026-04-07 05:38:05.993340 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-07 05:38:05.993352 | orchestrator | Tuesday 07 April 2026 05:38:01 +0000 (0:00:00.140) 
0:03:40.125 ********* 2026-04-07 05:38:05.993363 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993374 | orchestrator | 2026-04-07 05:38:05.993384 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-07 05:38:05.993395 | orchestrator | Tuesday 07 April 2026 05:38:01 +0000 (0:00:00.567) 0:03:40.693 ********* 2026-04-07 05:38:05.993405 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993416 | orchestrator | 2026-04-07 05:38:05.993444 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-07 05:38:05.993478 | orchestrator | Tuesday 07 April 2026 05:38:02 +0000 (0:00:00.551) 0:03:41.244 ********* 2026-04-07 05:38:05.993490 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993501 | orchestrator | 2026-04-07 05:38:05.993512 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-07 05:38:05.993523 | orchestrator | Tuesday 07 April 2026 05:38:02 +0000 (0:00:00.137) 0:03:41.382 ********* 2026-04-07 05:38:05.993534 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993545 | orchestrator | 2026-04-07 05:38:05.993556 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-07 05:38:05.993567 | orchestrator | Tuesday 07 April 2026 05:38:02 +0000 (0:00:00.151) 0:03:41.533 ********* 2026-04-07 05:38:05.993578 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993589 | orchestrator | 2026-04-07 05:38:05.993601 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-07 05:38:05.993612 | orchestrator | Tuesday 07 April 2026 05:38:02 +0000 (0:00:00.121) 0:03:41.654 ********* 2026-04-07 05:38:05.993623 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993634 | orchestrator | 2026-04-07 05:38:05.993688 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-04-07 05:38:05.993700 | orchestrator | Tuesday 07 April 2026 05:38:02 +0000 (0:00:00.132) 0:03:41.787 ********* 2026-04-07 05:38:05.993720 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993731 | orchestrator | 2026-04-07 05:38:05.993742 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-07 05:38:05.993753 | orchestrator | Tuesday 07 April 2026 05:38:03 +0000 (0:00:00.449) 0:03:42.236 ********* 2026-04-07 05:38:05.993764 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993775 | orchestrator | 2026-04-07 05:38:05.993787 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-07 05:38:05.993798 | orchestrator | Tuesday 07 April 2026 05:38:03 +0000 (0:00:00.146) 0:03:42.382 ********* 2026-04-07 05:38:05.993809 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993820 | orchestrator | 2026-04-07 05:38:05.993832 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-07 05:38:05.993843 | orchestrator | Tuesday 07 April 2026 05:38:03 +0000 (0:00:00.138) 0:03:42.521 ********* 2026-04-07 05:38:05.993854 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993865 | orchestrator | 2026-04-07 05:38:05.993876 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-07 05:38:05.993887 | orchestrator | Tuesday 07 April 2026 05:38:03 +0000 (0:00:00.145) 0:03:42.666 ********* 2026-04-07 05:38:05.993898 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:05.993909 | orchestrator | 2026-04-07 05:38:05.993921 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-07 05:38:05.993932 | orchestrator | Tuesday 07 April 2026 05:38:03 +0000 (0:00:00.162) 0:03:42.829 ********* 2026-04-07 05:38:05.993943 | orchestrator | ok: [testbed-node-0] 2026-04-07 
05:38:05.993954 | orchestrator | 2026-04-07 05:38:05.993965 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-07 05:38:05.993976 | orchestrator | Tuesday 07 April 2026 05:38:04 +0000 (0:00:00.250) 0:03:43.079 ********* 2026-04-07 05:38:05.993987 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.993998 | orchestrator | 2026-04-07 05:38:05.994009 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-07 05:38:05.994083 | orchestrator | Tuesday 07 April 2026 05:38:04 +0000 (0:00:00.149) 0:03:43.229 ********* 2026-04-07 05:38:05.994095 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994106 | orchestrator | 2026-04-07 05:38:05.994117 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-07 05:38:05.994128 | orchestrator | Tuesday 07 April 2026 05:38:04 +0000 (0:00:00.119) 0:03:43.349 ********* 2026-04-07 05:38:05.994139 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994150 | orchestrator | 2026-04-07 05:38:05.994161 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-07 05:38:05.994171 | orchestrator | Tuesday 07 April 2026 05:38:04 +0000 (0:00:00.143) 0:03:43.492 ********* 2026-04-07 05:38:05.994182 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994193 | orchestrator | 2026-04-07 05:38:05.994204 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-07 05:38:05.994215 | orchestrator | Tuesday 07 April 2026 05:38:04 +0000 (0:00:00.139) 0:03:43.632 ********* 2026-04-07 05:38:05.994226 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994236 | orchestrator | 2026-04-07 05:38:05.994247 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-07 05:38:05.994258 | orchestrator | Tuesday 07 
April 2026 05:38:04 +0000 (0:00:00.154) 0:03:43.787 ********* 2026-04-07 05:38:05.994269 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994280 | orchestrator | 2026-04-07 05:38:05.994291 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-07 05:38:05.994308 | orchestrator | Tuesday 07 April 2026 05:38:05 +0000 (0:00:00.144) 0:03:43.931 ********* 2026-04-07 05:38:05.994338 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994350 | orchestrator | 2026-04-07 05:38:05.994362 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-07 05:38:05.994373 | orchestrator | Tuesday 07 April 2026 05:38:05 +0000 (0:00:00.490) 0:03:44.422 ********* 2026-04-07 05:38:05.994389 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994400 | orchestrator | 2026-04-07 05:38:05.994411 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-07 05:38:05.994422 | orchestrator | Tuesday 07 April 2026 05:38:05 +0000 (0:00:00.130) 0:03:44.552 ********* 2026-04-07 05:38:05.994433 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994444 | orchestrator | 2026-04-07 05:38:05.994455 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-07 05:38:05.994465 | orchestrator | Tuesday 07 April 2026 05:38:05 +0000 (0:00:00.132) 0:03:44.684 ********* 2026-04-07 05:38:05.994476 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:05.994487 | orchestrator | 2026-04-07 05:38:05.994498 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-07 05:38:05.994509 | orchestrator | Tuesday 07 April 2026 05:38:05 +0000 (0:00:00.134) 0:03:44.819 ********* 2026-04-07 05:38:24.675863 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676005 | orchestrator | 2026-04-07 
05:38:24.676025 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-07 05:38:24.676039 | orchestrator | Tuesday 07 April 2026 05:38:06 +0000 (0:00:00.183) 0:03:45.002 ********* 2026-04-07 05:38:24.676050 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676061 | orchestrator | 2026-04-07 05:38:24.676073 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-07 05:38:24.676084 | orchestrator | Tuesday 07 April 2026 05:38:06 +0000 (0:00:00.200) 0:03:45.202 ********* 2026-04-07 05:38:24.676095 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.676107 | orchestrator | 2026-04-07 05:38:24.676118 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-07 05:38:24.676129 | orchestrator | Tuesday 07 April 2026 05:38:07 +0000 (0:00:00.943) 0:03:46.146 ********* 2026-04-07 05:38:24.676141 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.676164 | orchestrator | 2026-04-07 05:38:24.676176 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-07 05:38:24.676187 | orchestrator | Tuesday 07 April 2026 05:38:08 +0000 (0:00:01.497) 0:03:47.644 ********* 2026-04-07 05:38:24.676198 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-07 05:38:24.676211 | orchestrator | 2026-04-07 05:38:24.676223 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-07 05:38:24.676234 | orchestrator | Tuesday 07 April 2026 05:38:09 +0000 (0:00:00.570) 0:03:48.215 ********* 2026-04-07 05:38:24.676245 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676256 | orchestrator | 2026-04-07 05:38:24.676268 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-07 05:38:24.676279 | orchestrator | 
Tuesday 07 April 2026 05:38:09 +0000 (0:00:00.122) 0:03:48.338 ********* 2026-04-07 05:38:24.676290 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676361 | orchestrator | 2026-04-07 05:38:24.676374 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-07 05:38:24.676388 | orchestrator | Tuesday 07 April 2026 05:38:09 +0000 (0:00:00.150) 0:03:48.488 ********* 2026-04-07 05:38:24.676402 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-07 05:38:24.676415 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-07 05:38:24.676429 | orchestrator | 2026-04-07 05:38:24.676442 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-07 05:38:24.676456 | orchestrator | Tuesday 07 April 2026 05:38:10 +0000 (0:00:01.190) 0:03:49.679 ********* 2026-04-07 05:38:24.676469 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.676483 | orchestrator | 2026-04-07 05:38:24.676495 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-07 05:38:24.676506 | orchestrator | Tuesday 07 April 2026 05:38:11 +0000 (0:00:00.664) 0:03:50.344 ********* 2026-04-07 05:38:24.676539 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676550 | orchestrator | 2026-04-07 05:38:24.676562 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-07 05:38:24.676573 | orchestrator | Tuesday 07 April 2026 05:38:11 +0000 (0:00:00.183) 0:03:50.528 ********* 2026-04-07 05:38:24.676584 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676595 | orchestrator | 2026-04-07 05:38:24.676606 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-07 05:38:24.676617 | orchestrator | Tuesday 07 April 2026 05:38:11 +0000 
(0:00:00.131) 0:03:50.660 ********* 2026-04-07 05:38:24.676627 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676638 | orchestrator | 2026-04-07 05:38:24.676650 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-07 05:38:24.676660 | orchestrator | Tuesday 07 April 2026 05:38:11 +0000 (0:00:00.144) 0:03:50.805 ********* 2026-04-07 05:38:24.676672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-07 05:38:24.676682 | orchestrator | 2026-04-07 05:38:24.676693 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-07 05:38:24.676704 | orchestrator | Tuesday 07 April 2026 05:38:12 +0000 (0:00:00.603) 0:03:51.408 ********* 2026-04-07 05:38:24.676715 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.676726 | orchestrator | 2026-04-07 05:38:24.676737 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-07 05:38:24.676748 | orchestrator | Tuesday 07 April 2026 05:38:13 +0000 (0:00:00.717) 0:03:52.126 ********* 2026-04-07 05:38:24.676759 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-07 05:38:24.676785 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-07 05:38:24.676796 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-07 05:38:24.676807 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676818 | orchestrator | 2026-04-07 05:38:24.676829 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-07 05:38:24.676840 | orchestrator | Tuesday 07 April 2026 05:38:13 +0000 (0:00:00.169) 0:03:52.296 ********* 2026-04-07 05:38:24.676851 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676862 | orchestrator | 
2026-04-07 05:38:24.676873 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-07 05:38:24.676884 | orchestrator | Tuesday 07 April 2026 05:38:13 +0000 (0:00:00.137) 0:03:52.433 ********* 2026-04-07 05:38:24.676895 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676906 | orchestrator | 2026-04-07 05:38:24.676917 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-07 05:38:24.676928 | orchestrator | Tuesday 07 April 2026 05:38:13 +0000 (0:00:00.172) 0:03:52.606 ********* 2026-04-07 05:38:24.676939 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.676950 | orchestrator | 2026-04-07 05:38:24.676961 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-07 05:38:24.676990 | orchestrator | Tuesday 07 April 2026 05:38:13 +0000 (0:00:00.179) 0:03:52.785 ********* 2026-04-07 05:38:24.677002 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677013 | orchestrator | 2026-04-07 05:38:24.677025 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-07 05:38:24.677036 | orchestrator | Tuesday 07 April 2026 05:38:14 +0000 (0:00:00.139) 0:03:52.925 ********* 2026-04-07 05:38:24.677047 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677058 | orchestrator | 2026-04-07 05:38:24.677069 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-07 05:38:24.677080 | orchestrator | Tuesday 07 April 2026 05:38:14 +0000 (0:00:00.148) 0:03:53.074 ********* 2026-04-07 05:38:24.677091 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.677102 | orchestrator | 2026-04-07 05:38:24.677114 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-07 05:38:24.677133 | orchestrator | Tuesday 07 April 2026 05:38:16 +0000 
(0:00:01.939) 0:03:55.013 ********* 2026-04-07 05:38:24.677144 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.677155 | orchestrator | 2026-04-07 05:38:24.677166 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-07 05:38:24.677177 | orchestrator | Tuesday 07 April 2026 05:38:16 +0000 (0:00:00.146) 0:03:55.160 ********* 2026-04-07 05:38:24.677189 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-04-07 05:38:24.677200 | orchestrator | 2026-04-07 05:38:24.677211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-07 05:38:24.677222 | orchestrator | Tuesday 07 April 2026 05:38:16 +0000 (0:00:00.569) 0:03:55.729 ********* 2026-04-07 05:38:24.677233 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677244 | orchestrator | 2026-04-07 05:38:24.677256 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-07 05:38:24.677267 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.166) 0:03:55.896 ********* 2026-04-07 05:38:24.677278 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677289 | orchestrator | 2026-04-07 05:38:24.677320 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-07 05:38:24.677331 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.137) 0:03:56.034 ********* 2026-04-07 05:38:24.677342 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677354 | orchestrator | 2026-04-07 05:38:24.677365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-07 05:38:24.677377 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.182) 0:03:56.217 ********* 2026-04-07 05:38:24.677388 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677399 | orchestrator | 2026-04-07 
05:38:24.677410 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-07 05:38:24.677422 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.167) 0:03:56.384 ********* 2026-04-07 05:38:24.677433 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677444 | orchestrator | 2026-04-07 05:38:24.677455 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-07 05:38:24.677466 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.168) 0:03:56.552 ********* 2026-04-07 05:38:24.677478 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677489 | orchestrator | 2026-04-07 05:38:24.677500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-07 05:38:24.677511 | orchestrator | Tuesday 07 April 2026 05:38:17 +0000 (0:00:00.147) 0:03:56.699 ********* 2026-04-07 05:38:24.677522 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677534 | orchestrator | 2026-04-07 05:38:24.677545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-07 05:38:24.677556 | orchestrator | Tuesday 07 April 2026 05:38:18 +0000 (0:00:00.157) 0:03:56.857 ********* 2026-04-07 05:38:24.677567 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:24.677578 | orchestrator | 2026-04-07 05:38:24.677590 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-07 05:38:24.677601 | orchestrator | Tuesday 07 April 2026 05:38:18 +0000 (0:00:00.134) 0:03:56.991 ********* 2026-04-07 05:38:24.677612 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:24.677623 | orchestrator | 2026-04-07 05:38:24.677634 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-07 05:38:24.677645 | orchestrator | Tuesday 07 April 2026 05:38:18 +0000 (0:00:00.495) 0:03:57.486 
********* 2026-04-07 05:38:24.677657 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-04-07 05:38:24.677668 | orchestrator | 2026-04-07 05:38:24.677679 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-07 05:38:24.677690 | orchestrator | Tuesday 07 April 2026 05:38:19 +0000 (0:00:00.613) 0:03:58.100 ********* 2026-04-07 05:38:24.677701 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-04-07 05:38:24.677725 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-07 05:38:24.677737 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-07 05:38:24.677749 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-07 05:38:24.677760 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-07 05:38:24.677771 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-07 05:38:24.677782 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-07 05:38:24.677793 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-07 05:38:24.677804 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-07 05:38:24.677815 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-07 05:38:24.677827 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-07 05:38:24.677838 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-07 05:38:24.677849 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-07 05:38:24.677860 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-07 05:38:24.677877 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-04-07 05:38:38.445852 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-04-07 05:38:38.445975 
| orchestrator | 2026-04-07 05:38:38.445995 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-07 05:38:38.446009 | orchestrator | Tuesday 07 April 2026 05:38:25 +0000 (0:00:05.910) 0:04:04.010 ********* 2026-04-07 05:38:38.446088 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446101 | orchestrator | 2026-04-07 05:38:38.446113 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-07 05:38:38.446125 | orchestrator | Tuesday 07 April 2026 05:38:25 +0000 (0:00:00.139) 0:04:04.150 ********* 2026-04-07 05:38:38.446136 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446146 | orchestrator | 2026-04-07 05:38:38.446157 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-07 05:38:38.446168 | orchestrator | Tuesday 07 April 2026 05:38:25 +0000 (0:00:00.132) 0:04:04.283 ********* 2026-04-07 05:38:38.446179 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446190 | orchestrator | 2026-04-07 05:38:38.446201 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-07 05:38:38.446212 | orchestrator | Tuesday 07 April 2026 05:38:25 +0000 (0:00:00.139) 0:04:04.423 ********* 2026-04-07 05:38:38.446223 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446234 | orchestrator | 2026-04-07 05:38:38.446245 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-07 05:38:38.446255 | orchestrator | Tuesday 07 April 2026 05:38:25 +0000 (0:00:00.134) 0:04:04.557 ********* 2026-04-07 05:38:38.446266 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446277 | orchestrator | 2026-04-07 05:38:38.446326 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-07 05:38:38.446337 | orchestrator | Tuesday 07 April 2026 
05:38:25 +0000 (0:00:00.149) 0:04:04.707 ********* 2026-04-07 05:38:38.446348 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446361 | orchestrator | 2026-04-07 05:38:38.446374 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-07 05:38:38.446388 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.154) 0:04:04.861 ********* 2026-04-07 05:38:38.446401 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446414 | orchestrator | 2026-04-07 05:38:38.446426 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-07 05:38:38.446439 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.124) 0:04:04.985 ********* 2026-04-07 05:38:38.446451 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446464 | orchestrator | 2026-04-07 05:38:38.446478 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-07 05:38:38.446518 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.124) 0:04:05.109 ********* 2026-04-07 05:38:38.446530 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446542 | orchestrator | 2026-04-07 05:38:38.446555 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-07 05:38:38.446568 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.129) 0:04:05.239 ********* 2026-04-07 05:38:38.446580 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446593 | orchestrator | 2026-04-07 05:38:38.446606 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-07 05:38:38.446619 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.406) 0:04:05.645 ********* 2026-04-07 05:38:38.446631 | orchestrator | skipping: 
[testbed-node-0] 2026-04-07 05:38:38.446644 | orchestrator | 2026-04-07 05:38:38.446657 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-07 05:38:38.446669 | orchestrator | Tuesday 07 April 2026 05:38:26 +0000 (0:00:00.137) 0:04:05.782 ********* 2026-04-07 05:38:38.446681 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446694 | orchestrator | 2026-04-07 05:38:38.446707 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-07 05:38:38.446719 | orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.149) 0:04:05.932 ********* 2026-04-07 05:38:38.446730 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446740 | orchestrator | 2026-04-07 05:38:38.446751 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-07 05:38:38.446762 | orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.225) 0:04:06.158 ********* 2026-04-07 05:38:38.446773 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446784 | orchestrator | 2026-04-07 05:38:38.446795 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-07 05:38:38.446806 | orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.141) 0:04:06.300 ********* 2026-04-07 05:38:38.446816 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446827 | orchestrator | 2026-04-07 05:38:38.446851 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-07 05:38:38.446862 | orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.228) 0:04:06.528 ********* 2026-04-07 05:38:38.446873 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446884 | orchestrator | 2026-04-07 05:38:38.446895 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-07 05:38:38.446906 | 
orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.138) 0:04:06.667 ********* 2026-04-07 05:38:38.446917 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446928 | orchestrator | 2026-04-07 05:38:38.446939 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-07 05:38:38.446952 | orchestrator | Tuesday 07 April 2026 05:38:27 +0000 (0:00:00.131) 0:04:06.798 ********* 2026-04-07 05:38:38.446963 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.446973 | orchestrator | 2026-04-07 05:38:38.446984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-07 05:38:38.446995 | orchestrator | Tuesday 07 April 2026 05:38:28 +0000 (0:00:00.136) 0:04:06.935 ********* 2026-04-07 05:38:38.447006 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447017 | orchestrator | 2026-04-07 05:38:38.447046 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-07 05:38:38.447058 | orchestrator | Tuesday 07 April 2026 05:38:28 +0000 (0:00:00.134) 0:04:07.069 ********* 2026-04-07 05:38:38.447069 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447080 | orchestrator | 2026-04-07 05:38:38.447091 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-07 05:38:38.447101 | orchestrator | Tuesday 07 April 2026 05:38:28 +0000 (0:00:00.143) 0:04:07.213 ********* 2026-04-07 05:38:38.447112 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447130 | orchestrator | 2026-04-07 05:38:38.447141 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-07 05:38:38.447152 | orchestrator | Tuesday 07 April 2026 05:38:28 +0000 (0:00:00.143) 0:04:07.357 ********* 2026-04-07 05:38:38.447163 | orchestrator | skipping: [testbed-node-0] 
=> (item=testbed-node-3)  2026-04-07 05:38:38.447174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-07 05:38:38.447184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-07 05:38:38.447195 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447206 | orchestrator | 2026-04-07 05:38:38.447217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-07 05:38:38.447227 | orchestrator | Tuesday 07 April 2026 05:38:29 +0000 (0:00:00.718) 0:04:08.075 ********* 2026-04-07 05:38:38.447238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-07 05:38:38.447249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-07 05:38:38.447259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-07 05:38:38.447270 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447280 | orchestrator | 2026-04-07 05:38:38.447308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-07 05:38:38.447319 | orchestrator | Tuesday 07 April 2026 05:38:30 +0000 (0:00:01.056) 0:04:09.132 ********* 2026-04-07 05:38:38.447329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-07 05:38:38.447340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-07 05:38:38.447350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-07 05:38:38.447361 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447372 | orchestrator | 2026-04-07 05:38:38.447383 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-07 05:38:38.447393 | orchestrator | Tuesday 07 April 2026 05:38:30 +0000 (0:00:00.408) 0:04:09.540 ********* 2026-04-07 05:38:38.447404 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447415 | orchestrator | 2026-04-07 
05:38:38.447425 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-07 05:38:38.447436 | orchestrator | Tuesday 07 April 2026 05:38:30 +0000 (0:00:00.153) 0:04:09.694 ********* 2026-04-07 05:38:38.447447 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-07 05:38:38.447457 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447468 | orchestrator | 2026-04-07 05:38:38.447479 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-07 05:38:38.447489 | orchestrator | Tuesday 07 April 2026 05:38:31 +0000 (0:00:00.619) 0:04:10.314 ********* 2026-04-07 05:38:38.447500 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:38.447511 | orchestrator | 2026-04-07 05:38:38.447522 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-07 05:38:38.447533 | orchestrator | Tuesday 07 April 2026 05:38:32 +0000 (0:00:01.007) 0:04:11.321 ********* 2026-04-07 05:38:38.447543 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:38.447554 | orchestrator | 2026-04-07 05:38:38.447565 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-07 05:38:38.447575 | orchestrator | Tuesday 07 April 2026 05:38:32 +0000 (0:00:00.147) 0:04:11.469 ********* 2026-04-07 05:38:38.447586 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-04-07 05:38:38.447597 | orchestrator | 2026-04-07 05:38:38.447607 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-07 05:38:38.447618 | orchestrator | Tuesday 07 April 2026 05:38:33 +0000 (0:00:00.630) 0:04:12.099 ********* 2026-04-07 05:38:38.447628 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-07 05:38:38.447639 | orchestrator | 2026-04-07 05:38:38.447650 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-04-07 05:38:38.447660 | orchestrator | Tuesday 07 April 2026 05:38:35 +0000 (0:00:02.152) 0:04:14.252 ********* 2026-04-07 05:38:38.447678 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:38:38.447689 | orchestrator | 2026-04-07 05:38:38.447700 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-07 05:38:38.447711 | orchestrator | Tuesday 07 April 2026 05:38:35 +0000 (0:00:00.198) 0:04:14.450 ********* 2026-04-07 05:38:38.447727 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:38.447738 | orchestrator | 2026-04-07 05:38:38.447749 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-07 05:38:38.447760 | orchestrator | Tuesday 07 April 2026 05:38:35 +0000 (0:00:00.168) 0:04:14.618 ********* 2026-04-07 05:38:38.447770 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:38.447781 | orchestrator | 2026-04-07 05:38:38.447792 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-07 05:38:38.447802 | orchestrator | Tuesday 07 April 2026 05:38:36 +0000 (0:00:00.437) 0:04:15.055 ********* 2026-04-07 05:38:38.447813 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:38:38.447824 | orchestrator | 2026-04-07 05:38:38.447835 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-07 05:38:38.447845 | orchestrator | Tuesday 07 April 2026 05:38:37 +0000 (0:00:01.049) 0:04:16.105 ********* 2026-04-07 05:38:38.447856 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:38:38.447866 | orchestrator | 2026-04-07 05:38:38.447877 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-07 05:38:38.447888 | orchestrator | Tuesday 07 April 2026 05:38:37 +0000 (0:00:00.618) 0:04:16.724 ********* 2026-04-07 05:38:38.447898 | orchestrator | ok: 
[testbed-node-0] 2026-04-07 05:38:38.447909 | orchestrator | 2026-04-07 05:38:38.447927 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-07 05:39:10.571134 | orchestrator | Tuesday 07 April 2026 05:38:38 +0000 (0:00:00.548) 0:04:17.273 ********* 2026-04-07 05:39:10.571316 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571337 | orchestrator | 2026-04-07 05:39:10.571351 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-07 05:39:10.571363 | orchestrator | Tuesday 07 April 2026 05:38:38 +0000 (0:00:00.463) 0:04:17.737 ********* 2026-04-07 05:39:10.571374 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571385 | orchestrator | 2026-04-07 05:39:10.571396 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-07 05:39:10.571407 | orchestrator | Tuesday 07 April 2026 05:38:39 +0000 (0:00:00.756) 0:04:18.494 ********* 2026-04-07 05:39:10.571418 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571429 | orchestrator | 2026-04-07 05:39:10.571440 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-07 05:39:10.571451 | orchestrator | Tuesday 07 April 2026 05:38:40 +0000 (0:00:00.714) 0:04:19.208 ********* 2026-04-07 05:39:10.571463 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-07 05:39:10.571475 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-07 05:39:10.571486 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-07 05:39:10.571497 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-04-07 05:39:10.571508 | orchestrator | 2026-04-07 05:39:10.571519 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-07 05:39:10.571530 | orchestrator | Tuesday 07 April 2026 05:38:43 +0000 
(0:00:02.850) 0:04:22.059 ********* 2026-04-07 05:39:10.571541 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:39:10.571551 | orchestrator | 2026-04-07 05:39:10.571562 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-07 05:39:10.571573 | orchestrator | Tuesday 07 April 2026 05:38:44 +0000 (0:00:01.043) 0:04:23.102 ********* 2026-04-07 05:39:10.571584 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571595 | orchestrator | 2026-04-07 05:39:10.571606 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-07 05:39:10.571617 | orchestrator | Tuesday 07 April 2026 05:38:44 +0000 (0:00:00.145) 0:04:23.248 ********* 2026-04-07 05:39:10.571628 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571665 | orchestrator | 2026-04-07 05:39:10.571679 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-07 05:39:10.571691 | orchestrator | Tuesday 07 April 2026 05:38:44 +0000 (0:00:00.149) 0:04:23.397 ********* 2026-04-07 05:39:10.571704 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571718 | orchestrator | 2026-04-07 05:39:10.571730 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-07 05:39:10.571743 | orchestrator | Tuesday 07 April 2026 05:38:45 +0000 (0:00:01.101) 0:04:24.499 ********* 2026-04-07 05:39:10.571756 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.571766 | orchestrator | 2026-04-07 05:39:10.571777 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-07 05:39:10.571788 | orchestrator | Tuesday 07 April 2026 05:38:46 +0000 (0:00:00.471) 0:04:24.971 ********* 2026-04-07 05:39:10.571799 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:39:10.571809 | orchestrator | 2026-04-07 05:39:10.571820 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-04-07 05:39:10.571831 | orchestrator | Tuesday 07 April 2026 05:38:46 +0000 (0:00:00.413) 0:04:25.385 ********* 2026-04-07 05:39:10.571842 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-04-07 05:39:10.571854 | orchestrator | 2026-04-07 05:39:10.571865 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-07 05:39:10.571875 | orchestrator | Tuesday 07 April 2026 05:38:47 +0000 (0:00:00.572) 0:04:25.957 ********* 2026-04-07 05:39:10.571886 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:39:10.571897 | orchestrator | 2026-04-07 05:39:10.571907 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-07 05:39:10.571918 | orchestrator | Tuesday 07 April 2026 05:38:47 +0000 (0:00:00.141) 0:04:26.099 ********* 2026-04-07 05:39:10.571929 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:39:10.571939 | orchestrator | 2026-04-07 05:39:10.571950 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-07 05:39:10.571961 | orchestrator | Tuesday 07 April 2026 05:38:47 +0000 (0:00:00.145) 0:04:26.244 ********* 2026-04-07 05:39:10.571972 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-04-07 05:39:10.571982 | orchestrator | 2026-04-07 05:39:10.571993 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-07 05:39:10.572004 | orchestrator | Tuesday 07 April 2026 05:38:47 +0000 (0:00:00.586) 0:04:26.831 ********* 2026-04-07 05:39:10.572015 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.572026 | orchestrator | 2026-04-07 05:39:10.572050 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-07 05:39:10.572061 | orchestrator | Tuesday 07 April 2026 05:38:49 +0000 
(0:00:01.263) 0:04:28.095 ********* 2026-04-07 05:39:10.572072 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.572083 | orchestrator | 2026-04-07 05:39:10.572093 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-07 05:39:10.572104 | orchestrator | Tuesday 07 April 2026 05:38:50 +0000 (0:00:00.997) 0:04:29.092 ********* 2026-04-07 05:39:10.572115 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.572126 | orchestrator | 2026-04-07 05:39:10.572136 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-07 05:39:10.572147 | orchestrator | Tuesday 07 April 2026 05:38:51 +0000 (0:00:01.396) 0:04:30.489 ********* 2026-04-07 05:39:10.572158 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:39:10.572169 | orchestrator | 2026-04-07 05:39:10.572180 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-07 05:39:10.572190 | orchestrator | Tuesday 07 April 2026 05:38:53 +0000 (0:00:02.321) 0:04:32.810 ********* 2026-04-07 05:39:10.572201 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-04-07 05:39:10.572211 | orchestrator | 2026-04-07 05:39:10.572239 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-07 05:39:10.572282 | orchestrator | Tuesday 07 April 2026 05:38:54 +0000 (0:00:00.617) 0:04:33.427 ********* 2026-04-07 05:39:10.572294 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.572304 | orchestrator | 2026-04-07 05:39:10.572315 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-07 05:39:10.572326 | orchestrator | Tuesday 07 April 2026 05:38:56 +0000 (0:00:01.555) 0:04:34.983 ********* 2026-04-07 05:39:10.572337 | orchestrator | ok: [testbed-node-0] 2026-04-07 05:39:10.572347 | orchestrator | 2026-04-07 05:39:10.572358 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-07 05:39:10.572369 | orchestrator | Tuesday 07 April 2026 05:38:58 +0000 (0:00:02.034) 0:04:37.018 ********* 2026-04-07 05:39:10.572380 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:39:10.572391 | orchestrator | 2026-04-07 05:39:10.572401 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-07 05:39:10.572412 | orchestrator | Tuesday 07 April 2026 05:38:58 +0000 (0:00:00.142) 0:04:37.161 ********* 2026-04-07 05:39:10.572425 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-07 05:39:10.572439 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-04-07 05:39:10.572451 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-07 05:39:10.572462 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-07 05:39:10.572475 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-07 05:39:10.572487 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0795a0d773d3ffdc3845ab8444602e5bcb0a8e45'}])  2026-04-07 05:39:10.572500 | orchestrator | 2026-04-07 05:39:10.572511 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-07 05:39:10.572522 | orchestrator | Tuesday 07 April 2026 05:39:07 +0000 (0:00:09.060) 0:04:46.222 ********* 
2026-04-07 05:39:10.572538 | orchestrator | changed: [testbed-node-0] 2026-04-07 05:39:10.572550 | orchestrator | 2026-04-07 05:39:10.572561 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-07 05:39:10.572571 | orchestrator | Tuesday 07 April 2026 05:39:08 +0000 (0:00:01.552) 0:04:47.774 ********* 2026-04-07 05:39:10.572589 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-07 05:39:10.572600 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-07 05:39:10.572610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-07 05:39:10.572621 | orchestrator | 2026-04-07 05:39:10.572632 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-07 05:39:10.572643 | orchestrator | Tuesday 07 April 2026 05:39:10 +0000 (0:00:01.160) 0:04:48.935 ********* 2026-04-07 05:39:10.572653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-07 05:39:10.572664 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-07 05:39:10.572675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-07 05:39:10.572685 | orchestrator | skipping: [testbed-node-0] 2026-04-07 05:39:10.572696 | orchestrator | 2026-04-07 05:39:10.572707 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-07 05:39:10.572724 | orchestrator | Tuesday 07 April 2026 05:39:10 +0000 (0:00:00.465) 0:04:49.400 ********* 2026-04-07 06:10:29.782584 | orchestrator | skipping: [testbed-node-0] 2026-04-07 06:10:29.782693 | orchestrator | 2026-04-07 06:10:29.782710 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-07 06:10:29.782722 | orchestrator | Tuesday 07 April 2026 05:39:10 +0000 (0:00:00.131) 0:04:49.531 *********
2026-04-07 06:10:29.782744 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
2026-04-07 06:10:29.782914 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-04-07 06:10:29.783169 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-04-07 06:10:29.783437 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-04-07 06:10:29.783690 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-04-07 06:10:29.783913 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-04-07 06:10:29.784142 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.356400", "end": "2026-04-07 06:10:29.700219", "msg": "non-zero return code", "rc": 1, "start": "2026-04-07 06:05:29.343819", "stderr": "2026-04-07T06:10:29.678+0000 7ffb1840f640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-04-07T06:10:29.678+0000 7ffb1840f640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []}
2026-04-07 06:10:29.784162 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin (): 'e1fdfe52-c0bc-9892-0127-000000000297'
2026-04-07 06:10:34.379945 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-04-07 06:10:34.379959 | orchestrator | Tuesday 07 April 2026 06:10:29 +0000 (0:31:19.244) 0:36:08.776 *********
2026-04-07 06:10:34.379972 | orchestrator | ok: [testbed-node-0]
2026-04-07 06:10:34.379998 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-04-07 06:10:34.380009 | orchestrator | Tuesday 07 April 2026 06:10:31 +0000 (0:00:01.828) 0:36:10.605 *********
2026-04-07 06:10:34.380022 | orchestrator | ok: [testbed-node-0]
2026-04-07 06:10:34.380047 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-04-07 06:10:34.380059 | orchestrator | Tuesday 07 April 2026 06:10:32 +0000 (0:00:00.741) 0:36:11.347 *********
2026-04-07 06:10:34.380072 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-04-07 06:10:34.380086 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin (): 'e1fdfe52-c0bc-9892-0127-0000000002a2'
2026-04-07 06:10:34.380139 | orchestrator | PLAY RECAP *********************************************************************
2026-04-07 06:10:34.380152 | orchestrator | localhost       : ok=0   changed=0 unreachable=0 failed=0 skipped=1   rescued=0 ignored=0
2026-04-07 06:10:34.380161 | orchestrator | testbed-manager : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 06:10:34.380168 | orchestrator | testbed-node-0  : ok=121 changed=7 unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
2026-04-07 06:10:34.380177 | orchestrator | testbed-node-1  : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 06:10:34.380185 | orchestrator | testbed-node-2  : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-04-07 06:10:34.380192 | orchestrator | testbed-node-3  : ok=33  changed=1 unreachable=0 failed=0 skipped=74  rescued=0 ignored=0
2026-04-07 06:10:34.380199 | orchestrator | testbed-node-4  : ok=33  changed=1 unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-04-07 06:10:34.380206 | orchestrator | testbed-node-5  : ok=33  changed=1 unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-04-07 06:10:34.380234 | orchestrator | TASKS RECAP ********************************************************************
2026-04-07 06:10:34.380265 | orchestrator | Tuesday 07 April 2026 06:10:33 +0000 (0:00:01.145) 0:36:12.492 *********
===============================================================================
2026-04-07 06:10:34.380280 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1879.24s
2026-04-07 06:10:34.380287 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.59s
2026-04-07 06:10:34.380294 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 11.95s
2026-04-07 06:10:34.380302 | orchestrator | Set cluster configs ---------------------------------------------------- 10.02s
2026-04-07 06:10:34.380310 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.00s
2026-04-07 06:10:34.380318 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.06s
2026-04-07 06:10:34.380339 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.91s
2026-04-07 06:10:34.380348 | orchestrator | Gather facts ------------------------------------------------------------ 4.18s
2026-04-07 06:10:34.380357 | orchestrator | Stop ceph mon ----------------------------------------------------------- 3.00s
2026-04-07 06:10:34.380365 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 2.85s
2026-04-07 06:10:34.380374 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 2.61s
2026-04-07 06:10:34.380382 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.41s
2026-04-07 06:10:34.380391 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.36s
2026-04-07 06:10:34.380399 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 2.35s
2026-04-07 06:10:34.380407 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 2.32s
2026-04-07 06:10:34.380415 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.23s
2026-04-07 06:10:34.380423 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 2.15s
2026-04-07 06:10:34.380432 | orchestrator | ceph-validate : Validate virtual_ips length ----------------------------- 2.09s
2026-04-07 06:10:34.380458 | orchestrator | ceph-container-engine : Include pre_requisites/prerequisites.yml -------- 2.06s
2026-04-07 06:10:34.380467 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 2.03s
2026-04-07 06:10:35.066647 | orchestrator | ERROR
2026-04-07 06:10:35.067213 | orchestrator | {
2026-04-07 06:10:35.067323 | orchestrator | "delta": "2:05:54.141942",
2026-04-07 06:10:35.067392 | orchestrator | "end": "2026-04-07 06:10:34.587997",
2026-04-07 06:10:35.067450 | orchestrator | "msg": "non-zero return code",
2026-04-07 06:10:35.067504 | orchestrator | "rc": 2,
2026-04-07 06:10:35.067554 | orchestrator | "start": "2026-04-07 04:04:40.446055"
2026-04-07 06:10:35.067604 | orchestrator | } failure
2026-04-07 06:10:35.270905 | PLAY RECAP
2026-04-07 06:10:35.270968 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-04-07 06:10:35.493422 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-07 06:10:35.495933 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-07 06:10:36.234214 | PLAY [Post output play]
2026-04-07 06:10:36.251491 | LOOP [stage-output : Register sources]
2026-04-07 06:10:36.324569 | TASK [stage-output : Check sudo]
2026-04-07 06:10:37.158180 | orchestrator | sudo: a password is required
2026-04-07 06:10:37.363736 | orchestrator | ok: Runtime: 0:00:00.017007
2026-04-07 06:10:37.379995 |
LOOP [stage-output : Set source and destination for files and folders]
2026-04-07 06:10:37.419357 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-07 06:10:37.495830 | orchestrator | ok
2026-04-07 06:10:37.504008 | LOOP [stage-output : Ensure target folders exist]
2026-04-07 06:10:37.952320 | orchestrator | ok: "docs"
2026-04-07 06:10:38.192124 | orchestrator | ok: "artifacts"
2026-04-07 06:10:38.429351 | orchestrator | ok: "logs"
2026-04-07 06:10:38.453686 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-07 06:10:38.491756 | TASK [stage-output : Make all log files readable]
2026-04-07 06:10:38.775714 | orchestrator | ok
2026-04-07 06:10:38.785502 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-07 06:10:38.820476 | orchestrator | skipping: Conditional result was False
2026-04-07 06:10:38.832507 | TASK [stage-output : Discover log files for compression]
2026-04-07 06:10:38.856737 | orchestrator | skipping: Conditional result was False
2026-04-07 06:10:38.867602 | LOOP [stage-output : Archive everything from logs]
2026-04-07 06:10:38.909276 | PLAY [Post cleanup play]
2026-04-07 06:10:38.918813 | TASK [Set cloud fact (Zuul deployment)]
2026-04-07 06:10:38.981676 | orchestrator | ok
2026-04-07 06:10:38.992753 | TASK [Set cloud fact (local deployment)]
2026-04-07 06:10:39.026696 | orchestrator | skipping: Conditional result was False
2026-04-07 06:10:39.037975 | TASK [Clean the cloud environment]
2026-04-07 06:10:39.632042 | orchestrator | 2026-04-07 06:10:39 - clean up servers
2026-04-07 06:10:40.411524 | orchestrator | 2026-04-07 06:10:40 -
testbed-manager
2026-04-07 06:10:40.505456 | orchestrator | 2026-04-07 06:10:40 - testbed-node-2
2026-04-07 06:10:40.595802 | orchestrator | 2026-04-07 06:10:40 - testbed-node-1
2026-04-07 06:10:40.685724 | orchestrator | 2026-04-07 06:10:40 - testbed-node-3
2026-04-07 06:10:40.783182 | orchestrator | 2026-04-07 06:10:40 - testbed-node-5
2026-04-07 06:10:40.875403 | orchestrator | 2026-04-07 06:10:40 - testbed-node-4
2026-04-07 06:10:40.971032 | orchestrator | 2026-04-07 06:10:40 - testbed-node-0
2026-04-07 06:10:41.068235 | orchestrator | 2026-04-07 06:10:41 - clean up keypairs
2026-04-07 06:10:41.087917 | orchestrator | 2026-04-07 06:10:41 - testbed
2026-04-07 06:10:41.119538 | orchestrator | 2026-04-07 06:10:41 - wait for servers to be gone
2026-04-07 06:10:49.841109 | orchestrator | 2026-04-07 06:10:49 - clean up ports
2026-04-07 06:10:50.020776 | orchestrator | 2026-04-07 06:10:50 - 1a2cbbfb-f811-4329-a1f8-8fb019f8cbf8
2026-04-07 06:10:50.274627 | orchestrator | 2026-04-07 06:10:50 - 29db1d6e-50d1-481d-87a7-2d8746850f87
2026-04-07 06:10:50.639448 | orchestrator | 2026-04-07 06:10:50 - 2ecc77c0-11d9-4514-aa7e-5daf1a68f11f
2026-04-07 06:10:51.027389 | orchestrator | 2026-04-07 06:10:51 - 8aff9767-7f1d-4772-9617-8bc22f2fa0f6
2026-04-07 06:10:51.269657 | orchestrator | 2026-04-07 06:10:51 - b9b697c0-419e-49a6-b335-9ea013ad97a0
2026-04-07 06:10:51.520799 | orchestrator | 2026-04-07 06:10:51 - c8032b9c-581d-4396-8cbd-fca097adcd51
2026-04-07 06:10:51.733140 | orchestrator | 2026-04-07 06:10:51 - d3fac3b3-3019-41bd-bd7e-b12911910e82
2026-04-07 06:10:51.942210 | orchestrator | 2026-04-07 06:10:51 - clean up volumes
2026-04-07 06:10:52.085090 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-0-node-base
2026-04-07 06:10:52.122152 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-4-node-base
2026-04-07 06:10:52.156945 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-2-node-base
2026-04-07 06:10:52.202378 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-3-node-base
2026-04-07 06:10:52.245308 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-1-node-base
2026-04-07 06:10:52.288048 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-5-node-base
2026-04-07 06:10:52.332461 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-manager-base
2026-04-07 06:10:52.374131 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-0-node-3
2026-04-07 06:10:52.416190 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-7-node-4
2026-04-07 06:10:52.456770 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-2-node-5
2026-04-07 06:10:52.499961 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-5-node-5
2026-04-07 06:10:52.544004 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-3-node-3
2026-04-07 06:10:52.584181 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-6-node-3
2026-04-07 06:10:52.625211 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-1-node-4
2026-04-07 06:10:52.669046 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-4-node-4
2026-04-07 06:10:52.712521 | orchestrator | 2026-04-07 06:10:52 - testbed-volume-8-node-5
2026-04-07 06:10:52.759322 | orchestrator | 2026-04-07 06:10:52 - disconnect routers
2026-04-07 06:10:53.386227 | orchestrator | 2026-04-07 06:10:53 - testbed
2026-04-07 06:10:54.552433 | orchestrator | 2026-04-07 06:10:54 - clean up subnets
2026-04-07 06:10:54.605001 | orchestrator | 2026-04-07 06:10:54 - subnet-testbed-management
2026-04-07 06:10:54.757048 | orchestrator | 2026-04-07 06:10:54 - clean up networks
2026-04-07 06:10:54.932162 | orchestrator | 2026-04-07 06:10:54 - net-testbed-management
2026-04-07 06:10:55.714204 | orchestrator | 2026-04-07 06:10:55 - clean up security groups
2026-04-07 06:10:55.754313 | orchestrator | 2026-04-07 06:10:55 - testbed-node
2026-04-07 06:10:55.866277 | orchestrator | 2026-04-07 06:10:55 - testbed-management
2026-04-07 06:10:55.970265 | orchestrator | 2026-04-07 06:10:55 - clean up floating ips
2026-04-07 06:10:56.006253 |
orchestrator | 2026-04-07 06:10:56 - 81.163.192.132
2026-04-07 06:10:56.374343 | orchestrator | 2026-04-07 06:10:56 - clean up routers
2026-04-07 06:10:56.486328 | orchestrator | 2026-04-07 06:10:56 - testbed
2026-04-07 06:10:57.601324 | orchestrator | ok: Runtime: 0:00:17.979241
2026-04-07 06:10:57.606023 | PLAY RECAP
2026-04-07 06:10:57.606158 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-07 06:10:57.744420 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-07 06:10:57.746999 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-07 06:10:58.513054 | PLAY [Cleanup play]
2026-04-07 06:10:58.529297 | TASK [Set cloud fact (Zuul deployment)]
2026-04-07 06:10:58.579959 | orchestrator | ok
2026-04-07 06:10:58.587047 | TASK [Set cloud fact (local deployment)]
2026-04-07 06:10:58.614321 | orchestrator | skipping: Conditional result was False
2026-04-07 06:10:58.639540 | TASK [Clean the cloud environment]
2026-04-07 06:10:59.809493 | orchestrator | 2026-04-07 06:10:59 - clean up servers
2026-04-07 06:11:00.300093 | orchestrator | 2026-04-07 06:11:00 - clean up keypairs
2026-04-07 06:11:00.318199 | orchestrator | 2026-04-07 06:11:00 - wait for servers to be gone
2026-04-07 06:11:00.363108 | orchestrator | 2026-04-07 06:11:00 - clean up ports
2026-04-07 06:11:00.438342 | orchestrator | 2026-04-07 06:11:00 - clean up volumes
2026-04-07 06:11:00.505800 | orchestrator | 2026-04-07 06:11:00 - disconnect routers
2026-04-07 06:11:00.530952 | orchestrator | 2026-04-07 06:11:00 - clean up subnets
2026-04-07 06:11:00.549253 | orchestrator | 2026-04-07 06:11:00 - clean up networks
2026-04-07 06:11:00.710673 | orchestrator | 2026-04-07 06:11:00 - clean up security groups
2026-04-07 06:11:00.750186 | orchestrator | 2026-04-07 06:11:00 - clean up floating ips
2026-04-07 06:11:00.779345 | orchestrator | 2026-04-07 06:11:00 - clean up routers
2026-04-07 06:11:01.178504 | orchestrator | ok: Runtime: 0:00:01.366731
2026-04-07 06:11:01.182658 | PLAY RECAP
2026-04-07 06:11:01.182794 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-07 06:11:01.314315 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-07 06:11:01.315977 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-07 06:11:02.074444 | PLAY [Base post-fetch]
2026-04-07 06:11:02.090201 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-07 06:11:02.156337 | orchestrator | skipping: Conditional result was False
2026-04-07 06:11:02.172500 | TASK [fetch-output : Set log path for single node]
2026-04-07 06:11:02.223069 | orchestrator | ok
2026-04-07 06:11:02.232822 | LOOP [fetch-output : Ensure local output dirs]
2026-04-07 06:11:02.710568 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/logs"
2026-04-07 06:11:02.991660 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/artifacts"
2026-04-07 06:11:03.247607 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d530b3fc0d474923ab0f01e3ee8118aa/work/docs"
2026-04-07 06:11:03.263221 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-07 06:11:04.209370 | orchestrator | changed: .d..t...... ./
2026-04-07 06:11:04.209742 | orchestrator | changed: All items complete
2026-04-07 06:11:04.953040 | orchestrator | changed: .d..t...... ./
2026-04-07 06:11:05.684574 | orchestrator | changed: .d..t...... ./
2026-04-07 06:11:05.713346 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-07 06:11:05.744637 | orchestrator | skipping: Conditional result was False
2026-04-07 06:11:05.747456 | orchestrator | skipping: Conditional result was False
2026-04-07 06:11:05.763142 | PLAY RECAP
2026-04-07 06:11:05.763211 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-07 06:11:05.915811 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-07 06:11:05.916850 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 06:11:06.669714 | PLAY [Base post]
2026-04-07 06:11:06.684282 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-07 06:11:07.976116 | orchestrator | changed
2026-04-07 06:11:07.994134 | PLAY RECAP
2026-04-07 06:11:07.994253 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 06:11:08.132676 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 06:11:08.133715 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-07 06:11:08.935623 | PLAY [Base post-logs]
2026-04-07 06:11:08.946684 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-07 06:11:09.473422 | localhost | changed
2026-04-07 06:11:09.484022 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-07 06:11:09.520141 | localhost | ok
2026-04-07 06:11:09.524389 | TASK [Set zuul-log-path fact]
2026-04-07 06:11:09.555333 | localhost | ok
2026-04-07 06:11:09.565579 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 06:11:09.604785 | localhost | ok
2026-04-07 06:11:09.613298 | TASK [upload-logs : Create log directories]
2026-04-07 06:11:10.141866 | localhost | changed
2026-04-07 06:11:10.145630 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-07 06:11:10.647598 | localhost -> localhost | ok: Runtime: 0:00:00.006777
2026-04-07 06:11:10.651920 | TASK [upload-logs : Upload logs to log server]
2026-04-07 06:11:11.259524 | localhost | Output suppressed because no_log was given
2026-04-07 06:11:11.264259 | LOOP [upload-logs : Compress console log and json output]
2026-04-07 06:11:11.325798 | localhost | skipping: Conditional result was False
2026-04-07 06:11:11.330799 | localhost | skipping: Conditional result was False
2026-04-07 06:11:11.338093 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-07 06:11:11.402298 | localhost | skipping: Conditional result was False
2026-04-07 06:11:11.404735 | localhost | skipping: Conditional result was False
2026-04-07 06:11:11.409435 | LOOP [upload-logs : Upload console log and json output]
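Note on the failure above: the upgrade aborted because testbed-node-0 never rejoined the monitor quorum; the retried task ran `ceph ... quorum_status --format json` inside the mon container and the check timed out against RADOS. As an illustration only (not the ceph-ansible implementation), here is a minimal sketch of the membership test such a task performs; the sample JSON is hypothetical and trimmed to the `quorum_names` field that matters:

```python
import json

# Hypothetical, trimmed sample of `ceph quorum_status --format json` output.
SAMPLE = '''
{
  "quorum_names": ["testbed-node-1", "testbed-node-2"],
  "monmap": {
    "mons": [
      {"name": "testbed-node-0"},
      {"name": "testbed-node-1"},
      {"name": "testbed-node-2"}
    ]
  }
}
'''

def mon_in_quorum(quorum_status_json: str, mon_name: str) -> bool:
    """Return True if the named monitor appears in the current quorum."""
    status = json.loads(quorum_status_json)
    return mon_name in status.get("quorum_names", [])

# In the sample, testbed-node-0 is listed in the monmap but has not
# (re)joined the quorum, which is the condition the task kept retrying on.
print(mon_in_quorum(SAMPLE, "testbed-node-0"))  # False
print(mon_in_quorum(SAMPLE, "testbed-node-1"))  # True
```

In this run the command never got that far: authentication to the cluster timed out after 300 seconds (`[errno 110] RADOS timed out`), so the quorum check failed on connectivity, not on membership.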